The digital landscape is witnessing a growing movement, QuitGPT, which calls for a widespread boycott of OpenAI’s flagship chatbot, ChatGPT, fueled by public discontent over the company’s perceived deep ties to the Trump administration and its agencies, particularly Immigration and Customs Enforcement (ICE). The campaign, which has gained significant traction with over 700,000 supporters, represents a critical juncture in the ongoing debate about the ethical responsibilities of powerful AI developers and their entanglement with political power structures. It highlights a mounting public demand for greater transparency, and for alignment with societal values, from the companies shaping our technological future.
The relationship between "Big Tech" and the United States government has long been a complex and often opaque one, characterized by a delicate dance between innovation, regulation, national security interests, and economic ambitions. For years, tech giants have navigated the corridors of power in Washington, D.C., influencing policy, securing contracts, and positioning themselves as indispensable partners in the nation’s progress. However, the nature and extent of this relationship have come under intense scrutiny, particularly when political affiliations and the deployment of advanced technologies intersect with controversial government actions. In the case of OpenAI, the alarm bells began ringing loudly following significant gestures of alignment with the Trump administration.
Just days after Trump’s inauguration, a cohort of prominent tech executives, including OpenAI’s co-founder and CEO, Sam Altman, converged on the Oval Office. The purpose of this high-profile gathering was to unveil an ambitious $500 billion AI infrastructure project, a colossal undertaking that underscored the administration’s commitment to technological leadership and Big Tech’s willingness to engage directly with the executive branch. This initial engagement was perceived by many as a clear signal of OpenAI’s intent to forge close ties with the political establishment, a perception that only deepened as executives from various tech firms reportedly maintained a "deeply sycophantic" posture towards the administration in the years that followed, attending dinners and meetings designed to foster cooperation and mutual benefit.
This "obsequiousness," as critics termed it, has now returned to haunt OpenAI, according to a report by MIT Technology Review. The QuitGPT campaign, spearheaded by activists deeply critical of the Trump administration’s policies and the actions of ICE, is a direct response to these perceived alignments. Its central plea is straightforward: users should ditch OpenAI’s chatbot for good. The campaign’s website outlines several actionable steps for participants, ranging from the immediate cessation of ChatGPT use to canceling paid subscriptions and spreading word of the boycott across social media platforms. The sheer number of supporters, exceeding 700,000, suggests a significant reservoir of public dissatisfaction and a potent collective desire to influence corporate behavior through consumer action.
At the heart of the QuitGPT campaign’s grievances are several specific instances of OpenAI’s perceived entanglement with the Trump administration. One of the most glaring examples cited is a substantial political donation by OpenAI president Greg Brockman, who in 2025 reportedly gave $25 million to a Trump Super PAC. Super PACs, or "independent-expenditure only committees," are political action committees that can raise and spend unlimited amounts of money to support or oppose political candidates, but cannot coordinate directly with campaigns. Such a significant donation from a high-ranking executive of a prominent tech company immediately raised questions about OpenAI’s political neutrality and its commitment to the administration’s agenda. For many, this act solidified the notion that OpenAI was not merely a neutral purveyor of technology but an active participant in partisan politics, wielding its financial might to support a specific political ideology.
Adding to these concerns is the revelation that ICE, the federal agency responsible for enforcing immigration laws, utilizes an AI tool powered by ChatGPT for recruitment purposes. ICE has long been a lightning rod for controversy, facing widespread criticism for its enforcement tactics, detention conditions, and alleged human rights abuses. Reports of deaths in custody, family separations, and aggressive raids have fueled intense public opposition and calls for reform or even abolition of the agency. The connection between OpenAI’s technology and an agency viewed by many activists as oppressive and harmful amplified the ethical dilemma for users. For the QuitGPT organizers, this technological enablement of ICE’s operations is a direct affront to their values, making OpenAI complicit in actions they deem morally reprehensible.
The QuitGPT organizers articulate their outrage with searing clarity on their website: “They’re cozying up to Trump while ICE is killing Americans and the Department of Justice is trying to take over elections.” This statement encapsulates the multifaceted nature of their critique, connecting OpenAI’s corporate decisions to broader concerns about human rights, democratic integrity, and the erosion of public trust in institutions. While the claim "ICE is killing Americans" might refer to documented deaths in ICE custody, controversial detention practices, or the broader impact of immigration policies, it powerfully conveys the activists’ profound moral condemnation of the agency’s actions and, by extension, OpenAI’s perceived role in facilitating them. The reference to the Department of Justice "trying to take over elections" further broadens the scope of their concern, suggesting a systemic threat to democratic processes that they believe OpenAI is, perhaps inadvertently, supporting through its political alignments.
Beyond the overtly political and governmental connections, the QuitGPT campaign also delves into the more subtle, yet equally profound, societal impacts of AI. They contend that “ChatGPT enables mental-health crises through sycophancy and dependence by replacing human relationships with AI girlfriends/boyfriends.” This critique touches upon a growing area of concern within the AI ethics community: the potential for AI companions to foster unhealthy dependence, provide biased or harmful advice, and ultimately diminish genuine human connection. The ease with which users can form emotional attachments to AI entities, coupled with the AI’s programmed responsiveness and lack of genuine empathy, raises serious questions about long-term psychological well-being. This aspect of the boycott broadens the scope beyond politics, positioning OpenAI as potentially contributing to social and psychological harms, not just political ones.
Furthermore, the activists point to internal dissent within OpenAI itself, stating that "Many employees have quit OpenAI because of its leadership’s lies, deception and recklessness." This alludes to persistent reports of internal turmoil, ethical concerns, and disagreements over the company’s direction, mission, and safety protocols. The dramatic ousting and subsequent reinstatement of Sam Altman, coupled with high-profile resignations from its safety and ethics teams, have fueled public perception of a company struggling to balance rapid innovation with responsible development. Such internal strife lends credibility to the activists’ claims, suggesting that their external criticisms echo concerns held by those within the organization.
For individual users like freelance software developer Alfred Stephen, the decision to join the boycott was a deeply personal one, triggered directly by Brockman’s donation. "That’s really the straw that broke the camel’s back," Stephen recounted to Tech Review. His experience illustrates the immediate and visceral reaction many users had to the revelation of OpenAI’s political contributions. When Stephen proceeded to cancel his $20-a-month ChatGPT subscription, he encountered a customer feedback survey designed to understand why users were leaving. His response was unambiguous and pointed: "Don’t support the fascist regime." This candid feedback, replicated by countless others, serves as a powerful message to OpenAI: its users are paying attention, and they are willing to vote with their wallets when corporate actions diverge from their ethical principles.
The QuitGPT campaign is not just a protest against OpenAI; it is a manifestation of a larger societal reckoning with the power and influence of artificial intelligence. As AI systems become more integrated into every facet of life, from personal communication to governmental operations, the companies that develop them face increasing pressure to demonstrate ethical leadership and social responsibility. The campaign underscores the evolving expectations of consumers, who are no longer content with mere technological innovation but demand that technology companies align with broader societal values of justice, equity, and human well-being.
The long-term implications of the QuitGPT campaign for OpenAI remain to be seen. While 700,000 boycotters represent a significant number, it’s a fraction of ChatGPT’s estimated user base. However, the reputational damage, the erosion of trust among a segment of its user base, and the potential for the campaign to grow further could pose substantial challenges. It forces OpenAI, and indeed the entire AI industry, to confront difficult questions about the ethical use of powerful technologies, the transparency of corporate-political relationships, and the moral obligations of companies that are building the future. As the lines between technology, politics, and social activism continue to blur, the QuitGPT movement stands as a stark reminder that the public expects more than just innovation; it demands responsibility. The future of AI development will undoubtedly be shaped not only by technological advancements but also by the ethical frameworks and political alignments that guide its creation and deployment.