Harvard astronomer Avi Loeb, known for his often-controversial claims about potential extraterrestrial civilizations, has turned his attention from distant stars to a more immediate, terrestrial phenomenon: apparent cognitive decline among people who rely too heavily on artificial intelligence. Loeb's public musings that 'Oumuamua, or spherules recovered from the Pacific Ocean floor, might be alien technology have certainly raised eyebrows within the scientific community. But his latest warning, about the atrophy of the human brain in the age of AI chatbots, resonates with a growing chorus of concern from educators, psychologists, and tech ethicists worldwide. It is a stark reminder that even as we marvel at AI's burgeoning capabilities, the human element, our very capacity for independent thought, may be quietly eroding.
In a recent essay published on his personal blog, Loeb articulated his apprehension: “Recently, I noticed that some people around me are starting to lose their cognitive abilities as a result of excessive use of Artificial Intelligence (AI) platforms, such as ChatGPT, Claude or Gemini.” He drew a vivid analogy: “This phenomenon resembles muscle loss from excessive use of public transportation as a substitute for walking.” The comparison is imperfect (transit riders often walk considerable distances as part of their commutes; drivers would be the more apt stand-in for sedentary behavior), but Loeb's underlying message is clear: outsourcing mental effort leads to mental weakening. He underscored the challenge this poses in academia, lamenting, “In academia, the only reliable way of testing the cognitive abilities of students right now is by placing them in a Faraday cage,” implying that only in an environment stripped of digital assistance can one gauge a student's unassisted intellect.
Loeb’s observations, though anecdotal, tap into a tangible and escalating concern voiced by a broad spectrum of researchers and educators since AI chatbots like ChatGPT began proliferating a few years ago. The advent of these powerful language models has been met with a mix of awe and trepidation, and empirical evidence increasingly supports the trepidation. Research papers, a wealth of anecdotal accounts from classrooms and workplaces, and grim predictions from cognitive scientists collectively outline this exact phenomenon: a "cognitive cost" associated with frequent AI tool usage.
One significant finding comes from a 2025 study by Swiss researcher Michael Gerlich, which reported a strong negative association between frequent AI tool use and critical-thinking performance, mediated by what Gerlich terms cognitive offloading. His research highlights how the convenience of instantaneous AI-generated answers bypasses the brain’s natural processes of problem-solving, analysis, synthesis, and evaluation. When AI readily produces solutions, explanations, or creative content, the user’s brain is less compelled to exert the effort these complex cognitive functions require, and over time that disuse can erode the habits of independent thought and reasoning. For instance, instead of wrestling with a complex ethical dilemma, weighing multiple perspectives, and constructing a nuanced argument, a user might simply prompt an AI for a summary, skipping the intellectual struggle that fosters deeper understanding and critical insight.
The long-term risks of this intellectual debt only grow as the number of AI users climbs globally. Recent research from the Pew Research Center found that a substantial and rapidly rising share of school-aged teens routinely use AI to complete their homework, with heavy use in educational settings appearing concentrated among minority and low-income students. This trend raises profound questions about educational equity and the potential for widening academic disparities. If certain demographics disproportionately rely on AI as a substitute for genuine learning and skill development, they risk entering higher education and the workforce with underdeveloped critical thinking, research, and writing skills, perpetuating cycles of disadvantage rather than breaking them. The implications for future innovation, scientific discovery, and societal progress are immense if a generation grows up more adept at prompting AI than at independent ideation and rigorous intellectual pursuit.
Beyond critical thinking, other cognitive functions are also susceptible to decay. Memory, for example, may be subtly affected: when external tools can instantly retrieve any piece of information, the brain’s internal retrieval mechanisms, and the associated benefits of encoding and consolidating memories, may become less robust. Similarly, creativity, often seen as a uniquely human trait, could be stifled if users habitually turn to AI for ideas, plots, or artistic concepts rather than cultivating their own imaginative faculties through sustained effort and divergent thinking. The iterative process of trial and error, of grappling with ambiguity and uncertainty, is often where true creativity flourishes, and AI’s capacity for rapid, often predictable output may inadvertently short-circuit this vital human process.
Loeb’s perspective is not merely a critique of AI’s functional limitations but a deeper philosophical statement about the essence of intelligence. He firmly rejects the notion that current AI systems are analogous to the human mind, declaring, “Regarding AI as similar to the beauty of the human mind is just like putting lipstick on a pig.” This vivid metaphor encapsulates his belief that while AI can mimic human intelligence in remarkable ways, it lacks the consciousness, intuition, subjective experience, and perhaps most importantly, the capacity for genuine wonder and original thought that defines human cognition. For Loeb, the true frontier of intelligence lies not in refining algorithms to simulate existing knowledge, but in the potential discovery of truly novel forms of consciousness.
Indeed, his primary scientific passion remains the search for "truly alien intelligence from another star." This pursuit highlights a fundamental distinction: current AI operates within the parameters of human-programmed logic and data, essentially a sophisticated reflection of human intelligence. An alien intelligence, by contrast, represents the possibility of a completely independent evolutionary path to consciousness, potentially operating on principles entirely unknown to us. It is this prospect of genuine, fundamentally different intelligence that excites Loeb, underscoring his view that while AI is a powerful tool, it is not a replacement for the inherent and evolving complexity of the human mind, nor for the breathtaking possibility of discovering intelligence from beyond our planet.
The concerns raised by Avi Loeb and supported by emerging research compel us to consider not just the capabilities of AI, but its profound impact on human development. The challenge lies in fostering a symbiotic relationship with AI, where it serves as an amplifier of human intellect rather than a substitute. This requires a concerted effort in education to teach digital literacy, critical evaluation of AI outputs, and the responsible integration of these tools. It means encouraging intellectual curiosity, the painstaking process of independent inquiry, and the celebration of human ingenuity. As AI continues its relentless advance, safeguarding our cognitive abilities, our capacity for critical thought, and our unique human spirit of inquiry becomes an imperative, ensuring that we remain the architects of our future, not just passive recipients of AI-generated realities.