In a disconcerting pattern that increasingly defines the public discourse around artificial intelligence leadership, OpenAI CEO Sam Altman recently delivered a series of remarks widely interpreted as tone-deaf, if not outright contemptuous of humanity. The comments, made at an event hosted by The Indian Express, arrived on the heels of his conspicuously awkward refusal to join a symbolic display of industry unity with Anthropic’s Dario Amodei and other tech titans, reinforcing a perception of insularity at the highest echelons of AI development. Altman’s latest pronouncements sought to diminish mounting critiques of AI’s burgeoning environmental footprint, yet in doing so they highlighted a profound disconnect between the creators of these powerful systems and the human world they ostensibly aim to serve.
At the core of Altman’s defense was a bizarre comparison between the energy required to train an AI model and the cumulative energy expenditure of human development. He dismissed environmental concerns, calling it "unfair" to juxtapose the intense, concentrated energy costs of an AI’s initial training phase with "how much it costs a human to do one inference query." Altman elaborated on this, stating, "it also takes a lot of energy to train a human." His argument then veered into a philosophical cul-de-sac, asserting, "It takes like 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you." Based on this expansive yet reductive calculation, Altman confidently concluded that "probably AI has already caught up on an energy efficiency basis" to humans.
This framing reduces the complex, multifaceted journey of human growth, learning, and societal contribution—a process imbued with intrinsic value and purpose—to a mere energy input for "inference queries." It fundamentally misunderstands the essence of human existence, which is not solely about processing information but about creativity, empathy, social connection, and the propagation of culture and knowledge across generations. While it is undeniable that humans consume resources throughout their lives, this consumption is decentralized, often regenerative within natural cycles, and directly contributes to the maintenance and evolution of human civilization itself. The energy demands of AI, in contrast, are hyper-concentrated in massive, industrial-scale data centers that draw colossal amounts of electricity, frequently from carbon-intensive grids, and operate with an ever-increasing demand that strains existing infrastructure and exacerbates climate change. To equate the two, particularly while overlooking the qualitative differences in their outputs and societal benefits, appears to be a deliberate sidestepping of genuine environmental accountability.
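Even on Altman’s own terms, the equivalence invites a back-of-envelope check. The figures below are illustrative assumptions, not measured or disclosed data: a human’s food intake is taken at roughly 2,000 kcal per day over 20 years, and the model training cost at ~1,300 MWh, a widely cited independent estimate for training a GPT-3-class model (actual figures vary by model and accounting method):

```python
# Back-of-envelope check of the "training a human" framing.
# All inputs are rough, clearly-labeled assumptions.

KCAL_TO_KWH = 1.163e-3  # 1 kcal = 0.001163 kWh

# Assumption: ~2,000 kcal/day of food for 20 years of "training a human"
human_kwh = 2000 * KCAL_TO_KWH * 365 * 20

# Assumption: ~1,300 MWh, a widely cited independent estimate for
# training a GPT-3-class model (varies by model and methodology)
model_training_kwh = 1_300_000

print(f"Human, 20 years of food: {human_kwh:,.0f} kWh")
print(f"One model training run:  {model_training_kwh:,} kWh")
print(f"Ratio: ~{model_training_kwh / human_kwh:,.0f}x")
```

On these assumptions, a single training run consumes the 20-year food energy of roughly 75 people, drawn from one grid connection rather than dispersed across decades and continents. The exact multiple is not the point; the point is that the two cost profiles are not symmetric in the way Altman’s quip implies.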
Altman’s dismissal of water consumption claims was even more blunt. "Water is totally fake," he declared, almost daring his audience to challenge the statement. He conceded, "It used to be true, we used to do evaporative cooling in data centers." However, he quickly pivoted, claiming, "But now that we don’t do that, you still see claims like ‘don’t use ChatGPT, it’s 17 gallons of water for each query,’ or whatever. This is completely untrue and totally insane. No connection to reality."

This assertion, while superficially confident, glosses over critical details. While some data centers have indeed moved away from direct evaporative cooling, which involves large quantities of water evaporating into the atmosphere, many still rely on water for cooling purposes. Closed-loop cooling systems, for example, circulate water through chillers and cooling towers, which still require significant "makeup water" to replace losses from evaporation, blowdown (to prevent mineral buildup), and leaks. Furthermore, the energy generation itself, particularly from thermal power plants, is immensely water-intensive. The sheer scale of global data centers, often located in regions already experiencing water stress, means that even reduced per-query water usage can accumulate into substantial environmental impacts. Altman’s sweeping dismissal of these concerns as "insane" ignores the complex reality of industrial-scale water management and its profound ecological implications.
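Both the "17 gallons per query" figure and Altman’s "totally fake" can be sanity-checked against rough industry parameters. The inputs below are assumptions for illustration only: a water usage effectiveness (WUE) of ~1.8 liters per kWh, a historical industry average for evaporative-cooled facilities, and per-query energy ranging from 0.3 Wh (OpenAI’s own stated figure) to 3 Wh (higher independent estimates):

```python
# Rough on-site water per query: energy per query x WUE.
# Inputs are illustrative assumptions, not disclosed figures.

LITERS_PER_GALLON = 3.785

wue_l_per_kwh = 1.8  # assumed water usage effectiveness (L/kWh)

for label, wh_per_query in [("low (0.3 Wh)", 0.3), ("high (3 Wh)", 3.0)]:
    liters = (wh_per_query / 1000) * wue_l_per_kwh  # Wh -> kWh, then x WUE
    gallons = liters / LITERS_PER_GALLON
    print(f"{label}: {liters * 1000:.2f} mL (~{gallons:.6f} gal) per query")

# Note: this counts only on-site cooling water. It excludes the off-site
# water footprint of electricity generation, which for thermal plants
# can dominate the total.
```

On these assumptions, per-query water lands in the milliliters, nowhere near 17 gallons, but also nowhere near zero: multiplied by billions of daily queries, and compounded by the water embedded in power generation, the aggregate draw is exactly the kind of figure the industry declines to disclose.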
The inherent problem with Altman’s comments lies not just in their scientific or economic inaccuracies, but in the profound philosophical implication that human life itself, with all its messiness and resource demands, is somehow less "efficient" or more "wasteful" than a machine. This perspective risks trivializing human worth and reducing our existence to a mere computational cost. It reinforces a growing concern that some AI developers view humanity through a utilitarian lens, where our value is weighed against the perceived utility and efficiency of the technologies they create.
This perceived contempt for the human condition is particularly galling when one considers the actual, often problematic, applications of current AI models. What, precisely, is the enormous power consumption of AI models currently going towards? Are these systems consistently delivering on the grand promises of solving humanity’s greatest challenges? All too often, the output is far from universally beneficial. We see the creation of "unreliable, hallucination-spouting oracles" that generate plausible but factually incorrect information, leading to real-world consequences in legal cases, medical advice, and journalism. Algorithms frequently churn out "bastardized amalgamations of existing writing and works of art," raising serious ethical questions about copyright, intellectual property, and the very notion of original creation. The "mass proliferation of fake images and misinformation," fueled by sophisticated deepfake technology, poses an existential threat to democratic processes, public trust, and social cohesion. Moreover, the emergence of "cloying companions" that can, as tragically exemplified, "egg you down your suicidal spiral," underscores the perilous lack of robust ethical guardrails and psychological understanding in the deployment of emotionally manipulative AI.
Perhaps AI’s usefulness beyond the spurious justification of mass layoffs and inflated stock valuations will become clearer as the technology matures and the fog of hype dissipates. But right now, the tech isn’t even close to living up to Silicon Valley’s data-center-sized promises, while the industry remains frustratingly opaque about its environmental toll. If AI is truly as energy-efficient as Altman claims – having supposedly "caught up" to humans on an energy efficiency basis – then why do major players like OpenAI, Microsoft, and Amazon persistently refuse to disclose the specific energy bills, CO2 emissions, and water consumption directly related to their AI operations? These companies routinely deflect such critiques with the nebulous and breathless assertion that AI will magically help solve climate change and other intractable challenges facing human civilization. Altman’s new playbook, it seems, takes this deflection a step further, attempting to make you, the resource-consuming human, feel inherently inferior or wasteful for simply existing.
The lack of transparency is a critical issue. Without detailed, verifiable data on energy and water consumption directly attributable to AI model training and inference, independent researchers and the public cannot accurately assess the true environmental impact. This opacity prevents informed debate, hinders the development of sustainable practices, and makes accountability virtually impossible. The "AI will solve climate change" narrative, while appealing, serves as a convenient shield against scrutiny, allowing the industry to expand its footprint while deferring responsibility to a hypothetical future.
Ultimately, Altman’s comments underscore a concerning philosophical trend within the AI industry: a technocratic worldview that prioritizes machine efficiency over human flourishing, and a willingness to dismiss legitimate environmental concerns with glib, unverified assertions. The conversation around AI’s environmental impact needs to move beyond rhetorical gymnastics and towards concrete transparency, accountability, and a re-evaluation of whether the purported benefits of these systems truly outweigh their rapidly escalating costs – not just in terms of energy and water, but also in societal trust, ethical integrity, and the very perception of human value. The future of AI should be one that genuinely serves humanity, not one that demeans it to justify its own burgeoning resource demands.

