Just under three and a half years after the watershed launch of ChatGPT, OpenAI, once hailed as the vanguard of artificial intelligence, appears to be in a precarious and increasingly unrecognizable state. Even as the company gears up for a potential initial public offering (IPO) later this year at a staggering valuation of up to $1 trillion, up from a mere $29 billion in January 2023, a cascade of bad news and controversies has plagued the Sam Altman-led firm throughout 2026, raising profound questions about its long-term viability, its ethical compass, and its ability to contend with an increasingly formidable competitive field.

The bruising year kicked off in late February with OpenAI’s controversial decision to snap up a lucrative Department of Defense contract that competitor Anthropic had notably shunned. Anthropic’s CEO, Dario Amodei, had drawn explicit red lines: his company’s advanced AI models would not be used for mass surveillance of Americans or for the development of autonomous weapons systems. When the Pentagon refused to agree to those safeguards, Anthropic walked away, making OpenAI’s subsequent acceptance look starkly unprincipled by comparison. The move immediately plunged OpenAI into a public relations nightmare. Sam Altman himself later conceded that the deal "looked opportunistic and sloppy," but the damage was already done: uninstall rates for ChatGPT spiked overnight amid a mass exodus of users. The episode not only tarnished OpenAI’s image but also elevated Anthropic as a more ethically conscious and trustworthy alternative, at a moment when Anthropic’s models were demonstrably pulling ahead in performance, particularly among discerning programmers and developers. The loss of public trust, compounded by the perceived ethical compromise, was a significant blow to a company that had long benefited from its public image of innovation and societal benefit.

Less than a month later, OpenAI announced the abrupt discontinuation of its highly anticipated text-to-video application, Sora. Initially touted as a groundbreaking tool capable of generating realistic and imaginative video scenes from text prompts, Sora quickly became an "unholy abomination" in the eyes of many, riddled with problems ranging from rampant copyright infringement to what critics dubbed "mindless AI slop." The decision to kill Sora, as reported by the Wall Street Journal, was driven by the company’s desperate need to free up vast computing resources for its next-generation models. That implicit admission underscored growing internal pressure: OpenAI was struggling to keep pace, the computational demands of its ambitious projects were proving unsustainable, and competitors like Anthropic were indeed starting to "eat its lunch" in the race for AI supremacy.

The ramifications of the Sora shutdown extended beyond internal resource allocation, triggering a partnership crisis. Disney, a major entertainment conglomerate, had just signed a staggering $1 billion contract with OpenAI in December, reportedly with Sora-linked projects in mind. According to Reuters, executives from both companies were meeting to discuss a Sora-related venture a mere 30 minutes before news broke that the app was being "shanked." The blindsiding move not only jeopardized a massive commercial partnership but also exposed an alarming lack of internal coordination, strategic foresight, and external communication at OpenAI, suggesting a chaotic operational environment behind the scenes, the kind of misstep that can severely damage future collaborations and investor confidence.

Compounding these operational and ethical challenges is a deepening financial quagmire. OpenAI optimistically projects $100 billion in advertising revenue by 2030, but its current finances make that figure look wildly speculative. The company’s spending continues to vastly outpace its relatively meager revenue streams, driven primarily by the colossal costs of training and operating advanced AI models, acquiring top talent, and funding cutting-edge research and development. In a telling move in February, OpenAI was forced to revise its ambitious $1.4 trillion in infrastructure commitments through 2030 down to a more "modest" $600 billion, less than half the original plan. The reduction may be a step toward fiscal realism, but it also signals a substantial scaling back of the company’s long-term ambitions and raises questions about its ability to maintain a competitive edge while rivals invest aggressively. Monetizing AI at all, amid proliferating open-source alternatives and intense competition driving down service prices, remains a critical hurdle OpenAI has yet to convincingly clear. The pressure to present a less "heart attack-inducing" balance sheet to prospective investors ahead of an IPO is palpable, yet the underlying economics remain deeply challenging.

Adding to the sense of instability, OpenAI has lost key executives in quick succession. Fidji Simo, the CEO of applications, who had been leading the charge to "cut the fat" and refocus the company on coding and enterprise offerings, unexpectedly announced earlier this month that she would take medical leave. Her absence, whatever its personal circumstances, comes at a moment when strategic leadership is paramount. The company’s chief marketing officer, Kate Rouch, had likewise stepped down to focus on her health and recovery from cancer. Both departures are attributed to health reasons, but their timing, during a period of intense internal and external pressure, inevitably raises concerns about the stability of the leadership team and the cumulative toll on strategic direction and operational execution. In a high-stakes, fast-moving industry, such voids can exacerbate existing challenges.

Capping off this tumultuous period, a lengthy and highly critical investigative piece in The New Yorker offered a damning summary of OpenAI’s predicament. The article painted an unflattering portrait of CEO Sam Altman, with numerous tech insiders describing him as a "relentless liar and master manipulator." More damaging still, sources claimed Altman lacks hands-on expertise in both programming and machine learning, an alarming charge, if true, against the chief executive of a company at the forefront of AI innovation, a domain where effective leadership typically demands a deep grasp of the underlying technology. The scrutiny of Altman’s leadership echoes the dramatic boardroom coup of November 2023, in which he was briefly ousted, suggesting a persistent pattern of internal mistrust and governance trouble that continues to plague the organization.

In short, the confluence of ethical compromises, strategic missteps, financial strain, executive instability, and questions about leadership paints a clear picture of a company in significant distress. OpenAI, which once held an early and commanding lead in the AI race, now appears to be struggling as its business decisions repeatedly fail to land and its ambitious projections collide with grim financial realities. With an IPO looming, all signs point to a scramble to present a more favorable, and potentially manufactured, balance sheet to investors already worried about an "AI bubble popping." As one senior executive at Microsoft, a key partner, starkly put it to The New Yorker: "I think there’s a small but real chance [Sam Altman is] eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer." Such a devastating assessment from within the industry underscores the gravity of the situation and the profound skepticism now shadowing OpenAI’s future. The company that promised to unlock humanity’s potential with AI faces an existential crisis, battling not just external competitors but internal turmoil and a rapidly eroding foundation of trust and financial stability. The question is no longer about its meteoric rise, but whether it can avoid a spectacular fall.