"Pull the plug! Pull the plug! Stop the slop! Stop the slop!" For a few hours this Saturday, February 28, London’s King’s Cross, home to the UK headquarters of AI giants including OpenAI, Meta, and Google DeepMind, rang with the chants of roughly two hundred anti-AI protesters, their message amplified by a riot of homemade signs. The demonstration, organized by two activist groups, "Pause AI" and "Pull the Plug," was billed as the largest mobilization of its kind to date, a marked escalation in public dissent against rapidly advancing artificial intelligence.

The anxieties on display ranged from the immediate to the existential: the flood of "online slop," the spread of abusive imagery, the prospect of autonomous "killer robots," and, ultimately, human extinction. One striking sight was a woman wearing a large homemade billboard on her head that asked, "WHO WILL BE WHOSE TOOL?" The letters "O" in "TOOL" had been cut out to serve as eyeholes, which only made the question more unsettling. Around her were signs reading "Pause before there’s cause," "EXTINCTION=BAD," and "Stop using AI." Another took aim at Google DeepMind CEO Demis Hassabis, dubbing him "Demis the Menace."

Amid the march, an older man wearing a sandwich board that read "AI? Over my dead body" explained what worried him: mass unemployment. "It’s about the dangers of unemployment," he said. "The devil finds work for idle hands." It is an old fear about technology displacing human labor, now amplified by AI’s unprecedented capabilities.

These anxieties are not new. For years, researchers and ethicists have warned about the harms, both tangible and speculative, of generative AI models such as OpenAI’s ChatGPT and Google DeepMind’s Gemini. What has changed is the emergence of organized protest movements able to draw sizable crowds, turning abstract concerns into public demonstrations.

The author’s previous encounters with anti-AI sentiment had been far smaller. In May 2023, a handful of individuals heckled an audience of hundreds outside a London lecture hall where OpenAI CEO Sam Altman was speaking. A Pause AI protest in June of the previous year, outside Google DeepMind’s London office, drew only a few dozen participants. Saturday’s event was an unmistakable escalation in both scale and visibility.

Joseph Miller, who leads the UK branch of Pause AI and co-organized the march, described the group’s growing reach. "We want people to know Pause AI exists. We’ve been growing very rapidly. In fact, we also appear to be on a somewhat exponential path, matching the progress of AI itself," he said before the protest. Miller, a PhD student at Oxford University working on mechanistic interpretability, a young field that tries to understand the inner workings of large language models (LLMs), worries that AI could slip irrevocably beyond human control, with potentially catastrophic consequences.

The threat, he said, need not come from a single malevolent "rogue superintelligence"; human error could be enough. "You just need someone to put AI in charge of nuclear weapons," he warned. "The more silly decisions that humanity makes, the less powerful the AI has to be before things go bad." The point landed with particular force given recent news: the US government had reportedly pressed Anthropic to allow its LLM, Claude, to be used for "legal" military purposes, a request Anthropic resisted, while OpenAI reportedly entered a similar agreement with the Department of Defense. Approached for comment on the protest, OpenAI declined.

Matilda da Rui, another Pause AI member and co-organizer of the march, sees AI as the ultimate challenge facing humanity: either it solves every other global problem for good, or it causes human extinction and leaves no one to face any problem at all. "It’s a mystery to me that anyone would really focus on anything else if they actually understood the problem," she said.

Despite the gravity of the issues, the atmosphere at the march was surprisingly convivial, even fun. There was little overt anger, and little sense that the survival of the species hung in the balance, perhaps a reflection of the sheer variety of concerns and demands on display.

A chemistry researcher the author met offered a critique that ranged from the speculative to the practical: the unsubstantiated claim that data centers emit infrasound that subtly induces paranoia in nearby residents, and the more grounded observation that AI-generated "slop" is making reliable academic sources ever harder to find online. Their proposed solution was simple: criminalize profiting from AI. "If you couldn’t make money from AI, it wouldn’t be such a problem."

A common sentiment among those interviewed was pragmatic skepticism that protests like this one would sway the tech industry directly. Maxime Fournes, the global head of Pause AI, who was at the march, admitted, "I don’t think that the pressure on companies will ever work. They are optimized to just not care about this problem." Fournes, who spent 12 years in the AI industry before joining Pause AI, favors a more indirect strategy: "We can slow down the race by creating protection for whistleblowers or showing the public that working in AI is not a sexy job, that actually it’s a terrible job—you can dry up the talent pipeline."

What most protesters hoped for, above all, was to raise public awareness and channel it into government regulation. The organizers had deliberately framed the march as a social event, encouraging anyone curious about the cause to come along, and the approach seemed to work. The author met a finance professional who had attended with his roommate for a casual reason: "Sometimes you don’t have that much to do on a Saturday anyway. If you can see the logic of the argument, it sort of makes sense to you, then it’s like ‘Yeah, sure, I’ll come along and see what it’s like.’”

He also reflected on what sets the anti-AI movement apart: it is hard, he suggested, to fundamentally oppose its core concerns, in contrast to other social movements, such as pro-Palestine protests, where opposing viewpoints are more common. "With this," he observed, "I feel like it’s very hard for someone to totally oppose what you’re marching for."

After winding through King’s Cross, the march ended in a church hall in Bloomsbury, where tables and chairs had been set out in neat rows. Protesters stuck name stickers to their chests and struck up tentative introductions with their neighbors; they had gathered, as they put it, to strategize on "how to save the world." With a train to catch, the author left them to it. The event may not immediately alter the course of AI development, but it marked a significant step in bringing organized public dissent into the global AI conversation.