Study Finds That Execs Are Outsourcing Their Thinking to AI, Raising Alarms About Cognitive Atrophy at the Top
The irony is palpable: while headlines tend to focus on AI’s potential to dull the thinking of students and frontline workers, a disquieting new study suggests that the business executives most eager to deploy artificial intelligence are themselves outsourcing their critical thinking and emotional labor to these systems, with potentially serious consequences for their own judgment and leadership.
The study, conducted by the market research firm 3Gem on behalf of Confluent.io and highlighted by *The Register*, surveyed 200 senior UK business figures, including company owners, founders, and CEOs. It found that 62 percent of respondents now use AI to inform “most decisions” within their organizations. More striking still, 140 of the 200 (70 percent) admitted to second-guessing their own strategic judgment when it diverged from an AI’s recommendation, and nearly half (46 percent) said they place more trust in AI’s counsel than in the collective wisdom of their human colleagues, a fundamental reorientation of executive decision-making.
The trend is not without precedent. A similar report from the previous year found that 64 percent of business leaders had consulted AI for advice on sensitive personnel matters, including terminations. The 2025 3Gem survey showed that figure dipping to 27 percent, but the overall pattern is clear: the people investing most aggressively in AI, often with little apparent regard for its broader effects on human cognition, are paradoxically becoming its most significant cognitive dependents. That dynamic highlights an overlooked facet of the AI revolution: its potentially debilitating effect on the cognitive abilities of those at the helm.
The phenomenon echoes earlier research. A joint Carnegie Mellon and Microsoft study published last year found that knowledge workers who placed high trust in the accuracy of generative AI applied significantly less critical thought to its output. The mechanism is straightforward: when people believe a task can be competently automated, they tend to disengage, take a backseat, and let the system run. The same passivity shows up in self-driving cars, where human drivers sometimes fail to intervene even as the AI veers off course, leading to accidents. The mental muscle for vigilance and critical assessment atrophies with disuse.
Further sharpening the concern, Søren Dinesen Østergaard, the Danish psychiatrist known for anticipating the condition now colloquially termed “AI psychosis,” warned this February that academic scholars who delegate their intellectual work to chatbots risk accumulating a substantial “cognitive debt”: the cumulative loss of problem-solving skill, analytical depth, and creative ideation that comes from letting a machine do work that would otherwise exercise the intellect. The implication for business leaders is stark: by offloading strategic decisions, data analysis, and even emotional processing to AI, executives may be running up a similar, if not greater, debt, eroding the very faculties that define effective leadership.
The implications extend well beyond convenience. First, heavy reliance on AI erodes human judgment. Strategic leadership blends data-driven analysis with intuition honed by experience and a deep understanding of human psychology and organizational culture. When AI dictates “most decisions,” empathy, foresight, ethical consideration, and the ability to navigate ambiguous, novel situations are sidelined. Leaders may become adept at executing AI’s directives while losing the capacity for the independent, nuanced thought that drives long-term vision and resilience.
Second, the trend threatens the loss of tacit knowledge. Decision-making involves more than processing explicit data points; it draws on unwritten rules, contextual understanding, and experiential wisdom that current AI models struggle to capture or replicate. As executives defer to AI, their own reservoirs of tacit knowledge may stagnate, leaving them ill-equipped to handle unforeseen crises or to innovate beyond existing paradigms. The “why” behind a decision, rooted in an executive’s grasp of market dynamics, human behavior, and organizational history, risks being replaced by an opaque AI output: a “black box” problem in which leaders implement recommendations without comprehending the underlying rationale.
This raises a critical accountability dilemma. If an AI-driven strategy fails or produces unintended consequences, where does responsibility lie: with the executive who approved the recommendation, the AI system itself, or its developers? The ambiguity invites a culture of diminished accountability, as leaders deflect blame onto a seemingly infallible machine. Reliance on AI could also stifle innovation and creativity. Breakthroughs emerge from challenging assumptions, embracing unconventional ideas, and taking calculated risks, whereas AI by its nature tends to optimize existing patterns rather than generate genuinely novel concepts. Executives constantly guided by algorithms trained on past data may find the potential for disruptive, forward-thinking strategies severely curtailed.
The long-term effects on executive skill are equally concerning. Like any muscle, cognitive abilities such as critical analysis, problem-solving, emotional intelligence, and strategic foresight weaken with disuse. Executives who habitually outsource these functions may become unable to perform them independently, a deskilling of the leadership class with consequences for individual careers and for organizational adaptability alike. And what message does it send to employees when leaders demonstrably trust an algorithm more than their human teams? It risks fostering distrust and disengagement, and a sense that human contributions are devalued, ultimately harming culture and morale.
At a broader societal scale, if executives across industries rely on similar AI models for decision-making, strategies may homogenize, reducing competitive differentiation and exposing entire sectors to shared blind spots or failures. There are ethical hazards too: AI models are trained on vast datasets that often reflect existing societal biases, and decisions mediated by biased algorithms risk perpetuating or amplifying systemic inequities in hiring, resource allocation, and market strategy. The nature of leadership itself comes into question: is leading in the AI age about guiding people and articulating a compelling vision, or merely about managing an array of sophisticated algorithms?
The evidence increasingly points one way: outsourcing one’s thinking to AI invites cognitive atrophy, and the executives who enthusiastically evangelized the “lobotomy machine” for others are, it seems, not immune to its effects. The situation demands urgent re-evaluation. AI should be a tool that augments human intelligence, not a replacement for it. Executives must engage critically with AI outputs, questioning, validating, and integrating them with their own judgment, intuition, and ethical frameworks, and organizations should invest in AI literacy so leaders understand both the capabilities and the limitations of these technologies. The goal should be a human-AI collaboration in which human oversight, critical thinking, and ethical responsibility remain at the core of leadership. Without that conscious effort, the very people steering our future may find their cognitive compasses diminished, navigating an increasingly complex world with outsourced minds.
More on AI: Harvard Professor Says AI Users Are Losing Cognitive Abilities