A young person holds a sign at a protest against OpenAI.


Loredana Sangiuliano / SOPA Images / LightRocket via Getty Images

**The Rage at OpenAI Has Grown So Immense That There Are Entire Protests Against It**

OpenAI, a company once lauded for its pioneering advancements in artificial intelligence, is facing an unprecedented torrent of public outrage and organized protests. The surge in animosity follows CEO Sam Altman's announcement last Friday of a new agreement with the Department of Defense (DoD) covering the deployment of OpenAI's AI systems across military applications. The deal, perceived by many as a troubling pivot toward military-industrial integration, has ignited a firestorm of criticism far exceeding any backlash the company has previously encountered.

The immediate aftermath of Altman's announcement saw a dramatic exodus of ChatGPT users, many of whom declared that their allegiance had shifted to Claude, the rival chatbot developed by Anthropic. Anthropic, founded by former OpenAI researchers, had conspicuously refused to strike a deal with the Pentagon that would grant the government unrestricted access to its AI systems – a refusal it maintained even under significant governmental pressure and threats to seize its technology. That principled stand resonated with the public, propelling Claude to the top of app store rankings and swiftly displacing OpenAI's beleaguered chatbot. Data underscored the severity of the backlash: uninstalls of the ChatGPT app spiked by nearly 300 percent over the weekend.

The initial wave of digital defection quickly coalesced into tangible, on-the-ground activism. By Tuesday, a burgeoning movement dubbed “QuitGPT” had mobilized approximately fifty protestors outside OpenAI’s San Francisco headquarters. With placards and impassioned speeches, their demonstration articulated critiques that went well beyond the military deal itself: protestors voiced concerns ranging from AI’s potential to irrevocably disrupt global job markets and exacerbate socio-economic inequalities to its alarming environmental footprint.

Perrin Millekin, a vocal protestor, articulated the environmental grievances to *Business Insider*, stating, “AI is taking water from communities, polluting communities, and it is also increasing communities’ electricity bills. They’re not even paying for it – we are.” This sentiment highlights a growing awareness of the massive computational resources and energy required to train and run large language models, drawing a direct link between tech giants and local environmental burdens. The development of advanced AI necessitates immense data centers, which consume vast quantities of water for cooling and contribute significantly to energy grids, often reliant on fossil fuels, thus impacting climate change and local resource availability.

Beyond the tangible environmental and economic impacts, more philosophical critiques resonated through the protest. Megan Matson, who refuses to use any AI technology, expressed her deep-seated apprehension to *BI*. “As soon as I saw it start showing up in visuals and imagery, I could see exactly where it heads,” Matson lamented. “It destroys journalism, it destroys art, it destroys the expression of our common humanity.” This critique touches on existential fears about AI’s capacity to automate creative and intellectual labor, potentially eroding human originality, critical thinking, and the fabric of cultural production. The concern is that as AI-generated content becomes indistinguishable from human output, the value of human creativity and unique expression could diminish, leading to a homogenization of culture and a loss of authentic human experience.

Even individuals from within the tech industry, typically perceived as proponents of such advancements, joined the ranks of the protestors. A 26-year-old Oakland tech worker, choosing anonymity behind a cardboard robot mask, conveyed the unusual nature of his participation to the *San Francisco Standard*. “I never go to protests. This is new for me,” he admitted. “We’re not normally political people. We’re techies, you know – we want to build stuff. What OpenAI is doing in terms of building legal mass surveillance technology for the government… is frankly, insane.” This statement underscores a significant shift: a growing discomfort within the tech community itself regarding the ethical implications and potential misuse of the technologies they help create, especially when those technologies are harnessed for state surveillance or autonomous weaponry. The idea that powerful AI tools could be repurposed to monitor citizens on a massive scale strikes a nerve, raising alarms about privacy, civil liberties, and the potential for authoritarian overreach.

The wave of anti-OpenAI sentiment was not confined to Silicon Valley. Across the Atlantic, hundreds of activists converged in London’s King’s Cross on Saturday. This area, a prominent tech hub housing the UK headquarters of OpenAI, Meta, and Google DeepMind, became the focal point for one of London’s largest anti-AI demonstrations to date. The protestors in London echoed many of the concerns raised in San Francisco, articulating a broader disillusionment with the direction of the AI industry and its perceived lack of ethical guardrails. Their collective voice highlighted international anxieties about AI governance, corporate responsibility, and the societal trajectory shaped by these powerful new technologies.

The widespread outrage clearly rattled Sam Altman. In an uncharacteristic move, he took to X (formerly Twitter) the day after the DoD deal was announced, hosting a rare “Ask Me Anything” (AMA) session to directly address his customers’ burgeoning concerns. During this session, he made a striking admission, conceding that the agreement had been “rushed” and that its “optics don’t look good.” This candid, albeit belated, acknowledgment hinted at an awareness of the public relations disaster unfolding.

By Monday, Altman was in full damage control mode. He issued a lengthy and overtly apologetic statement, announcing that OpenAI was now altering the terms of its Pentagon deal. The revised agreement, he claimed, would explicitly prohibit the use of OpenAI’s AI systems for surveilling US citizens. This specific restriction was a key “red line” over which Anthropic had reportedly fallen out with the Pentagon, highlighting OpenAI’s attempt to mollify critics by adopting a similar ethical stance on at least one front. However, a critical omission in Altman’s apology did not escape scrutiny: he made no mention of Anthropic’s other fundamental guarantee – that its AI would not be used in autonomous weapon systems. This glaring absence suggested that while OpenAI was willing to concede on surveillance, the door remained open for its technology to be integrated into lethal autonomous systems, a prospect that continues to alarm ethical AI advocates globally.

This isn’t the first instance of OpenAI facing public outcry over its collaboration with military entities. In February 2024, more than a year prior to the current controversy, dozens of activists protested near the entrance of OpenAI’s headquarters. This earlier demonstration was triggered by OpenAI’s quiet removal of a crucial stipulation from its usage policies – one that had explicitly banned military and warfare applications of its technology. Mere days after this revision, OpenAI publicly announced its collaboration with the Pentagon on several projects, signaling a clear shift in its operational ethics and an increasing willingness to engage with defense contractors. The historical pattern suggests a gradual, yet persistent, alignment with military interests, which has consistently drawn criticism from those advocating for AI’s peaceful and ethical development.

Adding another layer of complexity and concern, dissent is now emanating from within the very companies at the forefront of AI development. Nearly 1,000 workers from both OpenAI and Google have collectively signed an open letter, published on notdivided.org, demanding that their respective companies refuse the Pentagon’s requests to utilize their AI technology for mass surveillance and, crucially, for autonomous weaponry. This internal rebellion signifies a profound ethical struggle within the tech industry, where engineers and researchers are increasingly grappling with the moral implications of their work. Their collective plea underscores a desire for responsible AI development, advocating for a future where these powerful tools serve humanity rather than becoming instruments of war or widespread surveillance. The growing internal pressure suggests that the debate over AI ethics is far from settled, and the future trajectory of these technologies remains a hotly contested battleground, both in the public square and within the corporate walls of the industry’s giants.

**More on AI:** *After Banning Anthropic From Military Use, Pentagon Still Relying Heavily on It in Iran War*