OpenAI CEO Sam Altman’s carefully constructed public persona, which oscillates between the thoughtful steward of a potentially dangerous technology and the relentless entrepreneur racing to dominate the market, was once again on full display at the recent BlackRock Infrastructure Summit. There, he candidly acknowledged AI’s profound and unsettling impact on the fundamental relationship between capital and labor, yet offered no concrete path for mitigating the social upheaval he implicitly foresees. This duality defines much of Altman’s public narrative: he frequently engages with top lawmakers and global leaders, ostensibly advocating for responsible AI development, while simultaneously pushing OpenAI to the bleeding edge of commercialization, often with reported consequences for societal well-being, from the spread of misinformation to potential job displacement and even tragic incidents linked to AI chatbots. His strategy often involves acknowledging uncomfortable truths without committing to tangible solutions, a form of "AI-washing" that frames complex societal challenges as inevitable forces rather than outcomes of specific corporate and political choices.
At the BlackRock summit, Altman addressed what he termed AI’s "public relations crisis," admitting that the burgeoning technology is indeed upending the age-old dynamic between capital and labor. He noted a prevalent trend where "data centers are getting blamed for electricity price hikes. Almost every company that does layoffs is blaming AI, whether or not it really is about AI." This observation points to the phenomenon of "AI-washing," where firms strategically attribute workforce reductions to AI integration even when more conventional market pressures or corporate restructuring are the true drivers. By doing so, companies not only deflect blame but also leverage the mystique and perceived inevitability of AI to justify difficult decisions. Regardless of the immediate cause, Altman conceded that AI is unequivocally empowering "capital"—the owners of businesses and the means of production—to radically erode "worker power." This shift, he suggested, is propelling society from an economy traditionally defined by scarcity into one promising "abundance."
"So that’s, like, a real change to how capitalism has worked," Altman stated, recognizing that the capitalist system, at least in its idealized form, has always relied on a delicate, if often imbalanced, equilibrium between business owners and the workforce. Historically, this balance, however imperfect, has been maintained through various mechanisms, including labor movements, collective bargaining, and regulatory frameworks designed to protect workers. Yet, as Altman pointed out, "if it’s hard in many of our current jobs to outwork a GPU, then that changes." This stark comparison highlights the core challenge: when advanced computational power, embodied in Graphics Processing Units (GPUs) and the AI models they run, can outperform human labor in an increasing number of tasks, the traditional value proposition of human work is fundamentally altered. "If there was an easy consensus answer, we’d have done it by now, so I don’t think anyone knows what to do," he concluded, a statement that, while seemingly humble, serves as a crucial component of his carefully curated narrative.
This seemingly humble admission—that "nobody knows what to do"—is, in itself, a clever piece of strategic "AI-washing." By vocalizing the obvious truth that AI disproportionately benefits the ruling class by undermining worker power, Altman simultaneously frames this outcome as an unavoidable consequence of technological progress and effectively absolves himself and his company of direct responsibility. It presents the disruption as an elemental force of nature, rather than the foreseeable result of design choices and market strategies. This rhetoric sidesteps the uncomfortable reality that powerful figures and corporations do have agency in shaping the future of AI and its societal integration. To claim a lack of solutions, especially from a leader at the forefront of AI development, can be interpreted not as genuine helplessness, but as a tacit endorsement of the status quo—a status quo that currently favors capital accumulation and technological acceleration over equitable distribution and worker protection.
A closer look at OpenAI’s actions under Altman’s leadership reveals a telling disconnect between his rhetorical acknowledgments and the company’s operational priorities. Despite recognizing the precarious position of labor, OpenAI has made virtually no public commitment to worker welfare. On the contrary, critics, including labor federations, have pointed to OpenAI’s perceived opposition to efforts aimed at regulating AI abuses in the workplace. There is a noticeable absence of advocacy for fundamental labor protections such as sectoral bargaining within the tech industry, or indeed any other industry likely to be affected by AI-driven displacement. Sectoral bargaining, which involves negotiations across entire industries rather than individual companies, is a powerful tool for ensuring fair wages, benefits, and working conditions for a broad swath of workers. OpenAI has also remained silent on the need for cost-of-living reductions or robust social safety nets, such as a universal basic income (UBI), which many economists and futurists propose as essential buffers against widespread AI-induced job losses. While Altman has occasionally mused about UBI, his company has not actively championed it or invested in its implementation. Crucially, there is no indication of worker representation within OpenAI’s governance structure itself, meaning those most directly affected by AI’s disruptive power have no voice in the decisions shaping its development and deployment.
Altman’s concern, therefore, appears largely cosmetic, a rhetorical flourish designed to demonstrate awareness without compelling action. This interpretation is reinforced by his subsequent declarations during the BlackRock appearance, where he reiterated his ultimate goal: to make AI "too cheap to meter." This ambition harks back to the mid-20th-century promise of nuclear power, where electricity would be so abundant and inexpensive that its consumption would not need to be measured. Applying this concept to "intelligence" suggests a future where AI’s capabilities are omnipresent and virtually free. "We want to flood the world with intelligence," Altman proclaimed, "and we want people to just use it for everything."
The vision of a world "flooded with intelligence" is compelling, even utopian, on its surface. It promises unprecedented innovation, problem-solving capabilities for humanity’s grandest challenges, and a new era of productivity. However, without a corresponding, equally ambitious plan to address the capital-labor relationship he so astutely acknowledges, this vision raises profound and unsettling questions. If intelligence becomes a limitless, near-free commodity, and the tools of production are overwhelmingly concentrated in the hands of capital, who truly benefits when this "flood" comes for us all? Will it lead to a truly abundant society where human potential is unleashed, or will it exacerbate existing inequalities, creating a vast underclass of displaced workers and an even more powerful elite?
The implications of "too cheap to meter" AI, uncoupled from robust social and economic safeguards, are stark. It could lead to an unprecedented concentration of wealth and power, where those who own and control the advanced AI infrastructure reap disproportionate rewards, while the vast majority struggle to find meaningful work or maintain a decent standard of living. The potential for a future where a small fraction of humanity flourishes in an AI-powered utopia, while the rest are left to contend with the remnants of a disrupted capitalist system, is a dystopia that leaders like Altman have a unique opportunity—and perhaps a moral obligation—to prevent. His public pronouncements, while insightful in their diagnosis of AI’s disruptive potential, remain hollow without concrete, pro-worker initiatives from OpenAI and the broader tech industry. Until then, his acknowledgments serve more as a public relations strategy than a genuine commitment to a more equitable AI-powered future. The critical juncture is not merely identifying the problem, but actively working towards solutions that ensure the benefits of AI are broadly shared, rather than further entrenching the power of capital at the expense of labor.
More on Sam Altman: Humongous Numbers of People Are Uninstalling ChatGPT as Anti-OpenAI Sentiment Surges