The opulent setting of the Vanity Fair after-Oscars bash, a glittering constellation of Hollywood’s elite, became an unexpected battleground for ethical debate last Sunday, as OpenAI CEO Sam Altman found himself fiercely confronted over his company’s controversial deal with the Department of Defense, widely referred to by critics as the "Department of War." Amidst the clinking of champagne glasses and the murmur of industry gossip, a stark spotlight was cast on the burgeoning tension among technological innovation, corporate responsibility, and the fraught realities of global conflict.
The party, an annual pilgrimage for A-listers and cultural tastemakers, saw luminaries such as Michael B. Jordan, Timothée Chalamet, Kylie Jenner, Teyana Taylor, and Zoe Saldaña mingling, largely oblivious to the brewing storm. Altman’s presence at such a high-profile entertainment event was, by now, hardly a surprise. As the architect of OpenAI, a company synonymous with cutting-edge artificial intelligence, he has made no secret of his foray into Hollywood’s inner sanctum, openly cultivating relationships with film executives, producers, and creatives, driven by an ambition not just to see AI tools enhance filmmaking but reportedly to directly back AI-animated features, positioning himself as a new-era mogul poised to reshape the very fabric of storytelling. This persistent courtship of Hollywood, a strategic move to secure future content partnerships and potentially influence public perception of AI, nonetheless placed him squarely in a world where ethical lines are often drawn sharply and publicly.
It was within this charged atmosphere that acclaimed playwright and screenwriter Jeremy O. Harris, a Tony Award winner for "Slave Play" and co-writer of the critically lauded indie film "Zola," made a dramatic beeline for Altman. Harris, known for his provocative and incisive social commentary, wasted no time in launching into a scathing verbal assault. According to multiple sources present, Harris accused Altman of being the "Goebbels of the Trump administration," a profoundly grave and shocking comparison rooted in the darkest chapters of human history. The accusation, delivered amidst the festive din, momentarily silenced those in earshot, underscoring the raw intensity of the sentiment. Altman, by all accounts, maintained a remarkable composure, responding calmly to the incendiary remarks, though the nature of his reply was not widely reported.
The gravity of Harris’s initial comparison to Joseph Goebbels, Adolf Hitler’s notorious Minister of Propaganda, cannot be overstated. Goebbels was the architect of the Nazi regime’s vast disinformation campaigns, wielding media and rhetoric as weapons to manipulate public opinion, incite hatred, and justify atrocities. His role was central to the dehumanization efforts that paved the way for genocide. The comparison immediately drew a parallel between OpenAI’s deal with the Pentagon and a perceived complicity in shaping narratives that could facilitate conflict or obscure truth. However, in a subsequent email to Page Six, Harris issued a partial apology, clarifying that his choice of Nazi collaborator was, in his inebriated state, a misstep. "It was late and I had a few too many martinis so I misspoke when I said Goebbels… I should’ve said Friedrich Flick," Harris stated, revealing an even more pointed and arguably more fitting historical analogy for his critique.
Friedrich Flick was a titan of German industry during the Nazi era, an "uber-rich German industrialist" whose sprawling business empire encompassed iron, steel, coal, cars, chemicals, aircraft, and arms. He became one of the Nazi regime’s largest suppliers, his factories and mines operating as crucial cogs in the Third Reich’s war machine. At the Nuremberg Trials, Flick was convicted of war crimes, specifically for his egregious use of Russian slave labor in his businesses, exploiting human suffering for immense profit and contributing directly to the perpetuation of the war. By invoking Flick, Harris pivoted his criticism from propaganda to the accusation of industrial-military complicity, positioning Altman not as a propagandist, but as a modern-day war profiteer, supplying advanced technological tools to a "war-mongering government." The implication was clear: OpenAI, under Altman’s leadership, was being seen as a corporate enabler of military aggression, profiting from the potential for conflict.
The catalyst for Harris’s outburst and the widespread outrage was OpenAI’s late February announcement of a new agreement with the Department of Defense. This deal, a significant departure from the company’s previously stated mission of developing AI for the benefit of humanity, involved deploying its advanced AI systems across various military applications. While specific details of the deployment remained somewhat opaque, the general understanding was that OpenAI’s powerful language models and analytical tools would be integrated into areas like data analysis, logistics, threat assessment, and potentially even command-and-control systems. The announcement was met with immediate and fierce backlash from the public, AI ethicists, and even within OpenAI’s own ranks.
The timing of this announcement could not have been more volatile. Barely a day after Altman’s revelation, the international community was rocked by news that the Trump administration had ordered a barrage of deadly airstrikes in Iran. These strikes, reportedly targeting key strategic locations, culminated in the death of Iran’s supreme leader, Ali Khamenei. In the chaotic aftermath, the humanitarian toll was staggering, with initial reports from outlets such as PBS NewsHour indicating that upwards of 1,000 civilians had perished in Tehran alone. This immediate and devastating geopolitical event, occurring almost simultaneously with OpenAI’s military pact, painted a grim picture for many: the perception that cutting-edge AI, developed by a company initially lauded for its ethical ambitions, was now directly empowering an administration to wage war with catastrophic civilian consequences.
Adding further fuel to the fire was the contrasting stance of OpenAI’s rival, Anthropic. Known for its "Constitutional AI" approach, which embeds ethical principles directly into its AI models, Anthropic had steadfastly refused to enter into a similar deal with the military that would grant unrestricted access to its AI systems. This refusal came despite what were described as "weighty threats" from the administration, including veiled warnings of government seizure of its technology. Anthropic’s principled stand, rooted in its commitment to preventing AI misuse in autonomous weaponry and mass surveillance, sharply highlighted OpenAI’s perceived capitulation. The public reaction was swift and decisive: Anthropic’s Claude chatbot rapidly surpassed OpenAI’s ChatGPT at the top of app store rankings, protests erupted outside OpenAI’s headquarters, and hundreds of its own employees signed an open letter, urgently demanding that their employer retract the Pentagon deal and commit to stringent ethical guardrails.
In the week that followed, Sam Altman went into full damage control mode. He publicly apologized for the Pentagon deal being "rushed," acknowledging the lack of transparency and the concerns it raised. Behind the scenes, at an all-hands meeting, he vigorously defended his decision to collaborate with the military, attempting to reassure a restive workforce and calling the backlash "really painful." Crucially, he updated the agreement to incorporate significant "redlines," precisely the ethical stipulations that Anthropic had insisted upon before their negotiations collapsed. These guardrails included explicit restrictions against using OpenAI’s AI in autonomous weaponry without human oversight and a categorical prohibition on its deployment for the mass surveillance of US citizens. While these amendments were a crucial step towards addressing immediate ethical concerns, critics argued they were reactive, born out of public pressure rather than proactive ethical foresight. The very need for such retrospective adjustments underscored the initial haste and lack of comprehensive ethical consideration in the original agreement.
The fallout from Altman’s military deal continues to reverberate, not only in public discourse but also within OpenAI itself. Caitlin Kalinowski, a veteran executive within the company, resigned in protest. Kalinowski explicitly criticized Altman’s "rushed deal" for its failure to adequately define "key guardrails" around its AI technology, echoing the concerns of many who feared the ethical implications of deploying such powerful tools without robust safeguards. Her departure underscored the deep internal divisions and moral dilemmas faced by employees grappling with the company’s shifting ethical landscape.
The confrontation at the Vanity Fair party, therefore, was far more than a drunken celebrity outburst. It was a potent symbol of the escalating public scrutiny facing the leaders of the artificial intelligence revolution. As AI continues its rapid advance, its creators are increasingly being held accountable for the real-world applications and consequences of their technologies. The ethical debate surrounding AI and warfare, the role of tech companies in national security, and the delicate balance between innovation, profit, and societal well-being are no longer abstract academic discussions. They are now playing out in the most public of arenas, from protest lines to the hallowed halls of Hollywood’s most exclusive parties, signaling a new era where technological power carries an unprecedented weight of moral responsibility. The incident served as a stark reminder that in a world increasingly shaped by AI, the decisions made by a few powerful individuals have profound implications for humanity’s future, and the public is watching, ready to confront those who they believe cross ethical lines.

