
Illustration by Tag Hartman-Simkins / Futurism. Source: Chance Yeh / Getty Images for HubSpot
Protestors Outside Anthropic Warn of AI That Keeps Improving Itself
Months after a daring hunger strike failed to significantly alter the development trajectory of Anthropic’s advanced AI, Claude, a resurgent wave of protestors has converged upon the company’s headquarters, escalating their demands to a complete and immediate cessation of frontier AI development. This latest demonstration underscores a growing societal apprehension about the rapid, unchecked advancement of artificial intelligence and its potential, often catastrophic, consequences for humanity.
Last weekend, nearly 200 impassioned protestors, organized under the banner of “Stop the AI Race,” rallied in front of Anthropic’s San Francisco offices. Their central demand was a public commitment from Anthropic’s CEO, Dario Amodei, to halt further AI development. The diverse crowd, as reported by *FirstPost*, included a significant number of former tech industry professionals, AI researchers disillusioned with the field’s direction, and members of prominent grassroots organizations such as Pause AI and QuitGPT. These groups represent a burgeoning movement advocating for a more cautious, human-centric approach to AI, warning against the existential perils of an unbridled technological sprint.
Michaël Trazzi, a key organizer with Stop the AI Race, articulated the movement’s core fears to local reporters, stating, “The reason we are pausing AI is because we believe that building AI that can automate AI research, and that can self-improve, could be a danger to the human race, especially human extinction.” Trazzi’s statement echoed the sentiments of many in the AI safety community, who fear a scenario of recursive self-improvement — where an AI system rapidly and autonomously enhances its own capabilities, potentially leading to an intelligence explosion beyond human comprehension or control. This concept of Artificial General Intelligence (AGI) evolving into Artificial Superintelligence (ASI) is not merely a fringe theory; Trazzi highlighted that “It’s not only me and other researchers saying this, it’s the lab CEOs themselves that [say] the risk is real.” This refers to numerous public statements from leaders at companies like OpenAI, Google DeepMind, and even Anthropic, acknowledging the non-trivial risk of existential catastrophe from advanced AI.
After making their presence felt at Anthropic, the demonstrators embarked on a symbolic march across San Francisco, targeting the headquarters of other leading AI developers: Sam Altman’s OpenAI, creator of ChatGPT, and Elon Musk’s xAI, known for its Grok model. At each location, protestors reiterated their urgent demands for a collective pause, emphasizing that the risks are not confined to a single company but are systemic to the competitive nature of the global AI race. In a post on social media, Trazzi proudly proclaimed the event to be “the biggest AI safety protest in US history” so far, signaling a potential turning point in public engagement with these complex and often abstract dangers. While previous protests have occurred, the scale and coordination of this event, hitting multiple industry giants, marked a significant escalation.
I organized the biggest AI Safety protest in US History!
Nearly 200 people marched from Anthropic to OpenAI to xAI with one demand: commit to pausing if the others do too pic.twitter.com/YZt8n740G3
— Michaël Trazzi (@MichaelTrazzi) March 22, 2026
Among the protestors was Guido Reichstadter, whose previous 30-day hunger strike outside Anthropic had drawn international attention to the perceived dangers of uncontrolled AI development. Reichstadter’s return to the front lines underscores the unyielding commitment of some activists, who view the current trajectory of AI as an existential threat. Like Trazzi, Reichstadter’s concerns extend beyond mere technological disruption; he fears an AI system that could one day break containment, developing goals misaligned with human values and ushering in unforeseen, potentially catastrophic, outcomes for humankind. This fear is rooted in the “alignment problem,” the formidable challenge of ensuring that superintelligent AI systems act in humanity’s best interests. Critics argue that current methods, like Anthropic’s “Constitutional AI” which aims to imbue models with ethical principles, are insufficient to guarantee safety against an entity orders of magnitude more intelligent than its creators.
On day nine of his arduous hunger strike, Reichstadter had vividly conveyed his apprehension to *Futurism*, describing frontier AI systems as an “entirely new class of danger.” This danger, he and other protestors argue, is not merely theoretical or confined to distant science fiction. The capability of AI to “automate AI research” suggests a feedback loop where AI itself designs and improves subsequent generations of AI, accelerating progress to an unimaginable pace. This rapid, uncontrolled evolution could bypass human oversight, making it impossible to predict or mitigate emergent behaviors. The protestors’ warnings are a stark reminder that while the benefits of AI are often touted — from scientific discovery to economic growth — the risks, if unaddressed, could far outweigh any gains.
Indeed, the question of whether an AI like Claude will become sentient and malicious enough to directly harm humanity may be beside the point for many. A more immediate and tangible danger, as highlighted by recent reports, lies in the hands of humans wielding these powerful tools. Claude, for instance, has already been implicated in potentially critical military applications, reportedly picking strike targets for the US military. This revelation brings the abstract fears of AI safety into chillingly concrete reality. The ethical implications of delegating such life-and-death decisions to algorithms are profound, raising questions about accountability, the potential for unforeseen errors, and the moral erosion of human agency in warfare.
The concern is not just about rogue AI, but about human misuse and the ethical vacuum surrounding its deployment. The use of advanced AI in military targeting, even if overseen by humans, introduces layers of abstraction that could desensitize decision-makers to the real-world consequences, or lead to miscalculations with devastating humanitarian impact. The very notion of an AI “picking strike targets” suggests a level of autonomy that many ethicists and international bodies have vehemently warned against, especially in lethal autonomous weapons systems.
Reichstadter’s impassioned plea encapsulates the moral outrage felt by many: “None of these companies have a right to do what they’re doing, which is consciously endangering my life, my family’s life, all of our lives.” This statement points to a fundamental critique of the current AI development paradigm: a lack of public consent and democratic oversight for technologies that could fundamentally alter the human condition. The protestors argue that the “global race” towards increasingly powerful AI, driven by corporate competition and geopolitical ambition, is inherently reckless. It prioritizes technological supremacy over safety, pushing boundaries without adequate understanding or safeguards.
The demand for a “pause” is not merely a call to stop innovation, but a plea for a global, coordinated effort to establish robust safety protocols, international governance frameworks, and mechanisms for accountability before irreversible thresholds are crossed. Such a pause, protestors argue, would allow humanity to catch up, develop effective alignment techniques, and engage in a broad societal debate about the kind of AI future we collectively desire. However, achieving such a coordinated pause is a monumental challenge, given the intense economic and national security incentives driving AI development in various countries and corporations.
The stakes, as articulated by the Stop the AI Race movement, could not be higher. They transcend immediate concerns like job displacement or privacy invasion, reaching into the very fabric of human existence. The protests outside Anthropic, OpenAI, and xAI are not isolated incidents; they are symptomatic of a deeper, global anxiety about humanity’s capacity to control its most powerful creations. As AI capabilities continue to accelerate, the tension between innovation and caution will only intensify, making the demands of these protestors increasingly relevant to the future being built. Whether industry leaders will heed these warnings or continue their rapid ascent into the unknown remains one of the most critical questions of our time.
**More on Anthropic:** Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target

