Sam Altman Admits He’s Made a Huge Mistake
OpenAI CEO Sam Altman plunged into a frantic damage control operation over the weekend, grappling with a public relations nightmare sparked by a controversial agreement with the Pentagon. Just a day before the U.S. strike on Iran, the embattled CEO had announced a deal governing the permissible uses of OpenAI’s advanced AI models. The swift and severe backlash has clearly dented the company’s standing and bottom line, forcing Altman into an increasingly defensive posture.
The core of the controversy stems from widespread perception that OpenAI’s move was a calculated attempt to undercut its rival, Anthropic, and seize a lucrative multibillion-dollar government contract. The previous week, Anthropic’s CEO, Dario Amodei, had publicly drawn a firm line in the sand, refusing the Department of Defense’s demands. Amodei staunchly insisted that Anthropic’s AI models, particularly its chatbot Claude, would not be deployed for autonomous killing machines or mass surveillance of American citizens. This principled stand earned Anthropic significant praise and widespread user support, positioning it as an ethically superior alternative in the burgeoning AI landscape.
Regardless of the true motivations behind Amodei’s public assurances (and skepticism toward billionaire CEOs’ pronouncements is often warranted), OpenAI’s decision effectively handed Anthropic a massive public relations victory. The resulting shift in market perception triggered a dramatic mass exodus from OpenAI’s ecosystem. Uninstall rates for OpenAI’s flagship product, ChatGPT, spiked an astonishing 295 percent day-over-day on Saturday, the day immediately following OpenAI’s announcement of its Pentagon deal. This unprecedented user flight signaled profound dissatisfaction with the company’s perceived ethical compromise.
In response, Altman embarked on what many observers have dubbed an “apology tour.” On Monday evening, he conceded in a lengthy tweet that OpenAI “shouldn’t have rushed” its Department of Defense agreement. This admission, while seemingly contrite, came across to many as too little, too late, especially given the gravity of the implications.
Following the widespread perception that OpenAI had capitulated to the Pentagon’s wishes, Altman made the extraordinary claim that the company would retroactively alter the terms of the deal. This bizarre twist, an attempt to amend an agreement after its public announcement, is unlikely to sit well with either Trump’s military or the company’s already disillusioned customer base. Such a move raises questions about the initial negotiation process, the sincerity of the original agreement, and OpenAI’s commitment to its stated ethical guidelines.
Altman specifically stated that OpenAI would “amend our deal” to incorporate a crucial prohibition: the prevention of “deliberate tracking, surveillance, or monitoring of US persons or nationals.” This was a direct response to one of Anthropic’s “red lines” and a clear attempt to mollify public outcry regarding privacy concerns and potential government overreach facilitated by AI.
“There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,” the CEO wrote, attempting to frame the amendment as a learning experience and a step towards responsible development. He added, “We will work through these, slowly, with the [Department of War], with technical safeguards and other methods.” This language suggests a cautious approach, acknowledging the complexities of integrating powerful AI into sensitive national security contexts.
Crucially, however, Altman’s tweet conspicuously made no mention of autonomous AI-enabled weapon systems. This was the other, equally critical, issue that formed the bedrock of Anthropic’s refusal to sign the Department of Defense deal. The omission leaves a significant ethical chasm unaddressed. Whether this means OpenAI considers such weapon systems to be still “on the table” for future applications remains ambiguous, but given Altman’s carefully chosen words, it is certainly not an option that has been explicitly ruled out.
Further complicating matters is the uncertainty surrounding the Defense Department’s willingness to agree to these revised, after-the-fact terms. It also remains unclear whether the Department had initially agreed to accommodate OpenAI’s original terms in a way it had refused for Anthropic, as CNBC points out. This suggests a potential double standard, or at least a difference in negotiating leverage between the two AI giants.
At the very least, Altman acknowledged the disastrous optics of his eleventh-hour amendment. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” he wrote, reflecting on the botched handling of the situation. He concluded with a personal reflection: “Good learning experience for me as we face higher-stakes decisions in the future.”
The drama escalated further when Altman made an appeal to the government not to designate Anthropic as a supply chain risk to national security. This came after Pete Hegseth, the Secretary of War, had announced on February 27 that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” Hegseth’s furious declaration, “Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable,” underscored the military’s intolerance for limitations imposed by private tech firms.
Whether Altman’s latest mea culpa and strategic maneuvers will effectively address OpenAI’s deepening PR crisis is highly dubious. For a significant portion of its user base, the damage has already been done. Moreover, the AI landscape is far from a one-horse race; ChatGPT is increasingly struggling to maintain its lead, often lagging behind other prominent AI companies on various large language model (LLM) leaderboards. The combination of ethical missteps and a diminishing competitive advantage paints a bleak picture.
“People are p*ssed and there are better products,” one Reddit user wrote, encapsulating the prevailing sentiment. “It’s a recipe for disaster.”
However, Anthropic’s Dario Amodei may not emerge as a knight in shining armor either. The plot thickened significantly over the weekend when the Wall Street Journal reported a startling revelation: the Department of Defense had selected targets in Iran using Anthropic’s Claude chatbot. This disclosure casts a harsh light on Anthropic’s preexisting ties with the military, directly contradicting the pristine ethical image Amodei sought to project.
In other words, while Amodei told CBS News in a carefully timed interview on Sunday that mass surveillance and autonomous weapons were the “two red lines” the company had maintained “from Day One,” the company had evidently signed a prior deal with the Pentagon. This earlier agreement seemingly permitted the use of Claude in ways that directly contributed to lethal military actions, undermining the credibility of his public declarations. This revelation suggests that the ethical high ground in the AI arms race is far more complex and treacherous than either company is willing to admit.
The entire saga underscores the immense ethical challenges and the fierce competitive pressures facing the AI industry. As these powerful technologies become increasingly integrated into all facets of society, including national security, the lines between innovation, ethics, and military application will continue to blur, demanding unprecedented transparency and accountability from the tech giants at the forefront.
More on the contract: Sam Altman in Damage Control Mode as ChatGPT Users Are Mass Cancelling Subscriptions Because OpenAI Is “Training a War Machine”