The recent India AI Summit in New Delhi, an event designed to showcase the nation’s burgeoning tech ambitions and foster global collaboration in artificial intelligence, inadvertently became the stage for a palpable display of Silicon Valley’s deepest rivalries, particularly between the titans leading OpenAI and Anthropic. On a seemingly innocuous Thursday, as the event drew to a close, the assembled industry and political leaders, including India’s charismatic Prime Minister Narendra Modi, found themselves in a line, poised for a symbolic gesture of unity. Modi, ever the showman, beckoned the executives to join hands and raise them aloft, a visual metaphor for a united front marching towards an AI-powered future. What unfolded instead was a stark illustration of the deep-seated animosity and divergent philosophies underscoring the AI landscape, as Sam Altman, CEO of OpenAI, and Dario Amodei, CEO of Anthropic, conspicuously declined to comply.
The moment, captured on video and quickly disseminated across social media, was a masterclass in awkwardness. As Modi extended his invitation for a collective hand-holding, a dozen prominent figures readily complied, beaming for the cameras. Yet, in the space between Altman and Amodei, an invisible wall seemed to materialize. Instead of clasping hands, there was a fleeting, almost imperceptible exchange of strained eye contact. Amodei, appearing visibly uncomfortable, cast his gaze around, a theatrical "who, me?" gesture that, in another context, might have elicited laughter but here only amplified the tension. His body language conveyed a clear reluctance, a searching glance perhaps for an escape or a justification. Altman, for his part, initially held his free hand tentatively in front of his chest, a gesture of confusion or perhaps cautious anticipation, before ultimately mirroring Amodei’s non-compliance. Both men, instead of linking hands in solidarity, chose to raise closed fists, a defiant, if subtle, rejection of the requested gesture. The scene, later dubbed a "cringe masterpiece" by one Redditor, laid bare the deep fissures within an industry often presented as a unified force for progress. Amodei’s expression remained unenthusiastic, while Altman’s initial bewilderment quickly settled into a similar, guarded resolve.
This public display of disunity was all the more striking given the broader context of the summit. The India AI Summit was conceived as a platform for global leaders to discuss the future of AI, its ethical implications, and its potential to transform societies. Prime Minister Modi, a figure who has faced significant international criticism for his government’s authoritarian tendencies and suppression of dissent, was seeking to align himself with the transformative power and positive image of AI, presenting India as a hub for innovation. The image Modi sought to project was one of harmonious collaboration between political leadership and technological pioneers, all working towards a common, beneficial goal. The sight of these smiling executives, gathered like children about to play "ring around the rosie," seemed to promise a utopian vision. The refusal of Altman and Amodei to participate in this charade shattered that illusion, revealing the very human, and often contentious, rivalries simmering beneath the surface of the industry’s polished facade.
The rivalry between OpenAI and Anthropic is not merely a business competition; it’s rooted in a profound ideological schism that traces back to Anthropic’s very genesis. Anthropic was founded by a splinter group of former OpenAI employees, including Dario Amodei and his sister Daniela Amodei, who departed OpenAI over fundamental disagreements about the company’s direction. Their primary concern was OpenAI’s perceived shift away from its original non-profit, safety-first mission towards a more aggressive, commercialized trajectory. The defectors believed that OpenAI was prioritizing rapid development and market dominance over robust AI safety and alignment research, particularly as it began to pursue lucrative partnerships and significant investment from Microsoft. Anthropic, in contrast, was established with a core mandate to prioritize AI safety, ethics, and alignment research above all else, aiming to build powerful AI systems that are "helpful, honest, and harmless." This philosophical divergence, often framed as "safety-first" versus "move fast and break things," has defined their relationship ever since.
In recent months, this long-standing tension has escalated dramatically, spilling into the public arena in increasingly overt ways. A prime example was Anthropic’s pointed campaign during the Super Bowl. While not explicitly naming OpenAI or ChatGPT, Anthropic’s series of advertisements subtly but unmistakably critiqued the trend of integrating advertisements and commercial content into AI models. These ads, widely interpreted as thinly veiled digs at OpenAI’s decision to monetize ChatGPT through various subscription tiers and potentially ad placements, aimed to position Anthropic’s Claude as a cleaner, more user-centric alternative, free from the distractions and potential biases introduced by commercial interests. The campaign clearly struck a nerve with Sam Altman, who launched a lengthy rant on X (formerly Twitter) that observers characterized as unbecoming. In a series of posts, Altman accused Anthropic’s ads of being "deceptive" and went so far as to label Anthropic an "authoritarian company." This outburst, described by many as a "mini-meltdown," revealed a raw vulnerability and defensiveness that belied his concurrent claim that he found the ads "funny." His public reaction underscored the depth of the animosity and the effectiveness of Anthropic’s strategic jab.
Beyond marketing battles, the two companies are also locked in a high-stakes war over the very future of AI regulation. This legislative struggle represents perhaps the most critical front in their rivalry, with profound implications for how AI will be developed and governed globally. Just last week, Anthropic announced a significant commitment of $20 million to a super PAC specifically formed to advocate for stronger AI regulation. This move was a direct counter to another super PAC, backed by key OpenAI figures and investors, which has been lobbying for a more laissez-faire, industry-friendly regulatory environment. The "OpenAI-aligned" super PAC generally argues that overly stringent regulations could stifle innovation and hinder American competitiveness in the global AI race, advocating for a more self-regulatory approach or minimal government intervention. Anthropic, on the other hand, is championing a regulatory framework that emphasizes robust safety guardrails, independent audits, transparency requirements, and mechanisms to prevent catastrophic risks. This includes advocating for specific legislative measures that would mandate rigorous testing, responsible deployment protocols, and accountability for AI systems. The $20 million commitment from Anthropic will be strategically deployed to support political campaigns of candidates who align with their vision for stronger AI governance, particularly ahead of upcoming midterm elections, effectively turning the battle for AI’s future into a political spending war. The outcome of this regulatory clash will not only shape the competitive landscape for OpenAI and Anthropic but also determine the ethical and safety standards for the entire AI industry for decades to come.
The "cringe masterpiece" in New Delhi, therefore, was far more than a fleeting moment of social awkwardness. It was a potent symbol of a deeper, multi-faceted conflict playing out at the highest echelons of the AI world. It highlighted the fundamental disagreements over AI’s purpose, its development trajectory, and its societal integration. The refusal to hold hands, however minor in isolation, spoke volumes about the lack of genuine unity and shared vision between these two pivotal players. It underscored the challenge of fostering collaboration in an industry marked by intense competition, philosophical divides, and immense financial stakes. For an industry that promises to solve humanity’s greatest problems and usher in an era of unprecedented progress, such public displays of discord raise uncomfortable questions about its internal maturity, its ability to self-govern, and its capacity to genuinely collaborate on critical issues like AI safety and responsible development. As the AI industry continues its rapid ascent, these internal battles, played out on global stages and digital forums alike, will undoubtedly shape its trajectory, for better or worse. The newest warning sign for the AI industry is not just about external threats or technical challenges, but about the profound internal divisions that threaten to derail a unified, responsible path forward.