Professors Say AI Is Destroying Their Students’ Ability to Think
In classrooms and lecture halls worldwide, a quiet but profound struggle is unfolding. Professors are fighting an uphill battle against the pervasive intrusion of Artificial Intelligence into education, a technological tide that is forcing them to fundamentally rethink how they teach. Many of their students, a generation coming of age in an era of unprecedented digital convenience, have already become deeply dependent on AI tools, raising alarms about the future of critical thought and intellectual independence.
The sentiment among educators is one of escalating frustration, bordering on despair. “It’s driving so many of us up the wall,” one anonymous professor confided to The Guardian in a recent in-depth report. This piece, which gathered insights from over a dozen humanities professors, paints a stark picture of academic life on the front lines of the AI revolution. The challenge, they argue, extends far beyond mere academic integrity; it delves into the very essence of human cognition and intellectual development.
Dora Zhang, a literature professor at UC Berkeley, articulated this profound shift, stating, “I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential. What is it doing to us as a species?” This question underscores the gravity of the situation, moving the discussion from punitive measures against plagiarism to a deeper philosophical inquiry into the impact of AI on human learning, creativity, and even our collective identity.
However, for many students, particularly those seeking the path of least resistance to an easy ‘A,’ these philosophical inquiries into how AI is fundamentally reshaping our interaction with the world and each other often fall on deaf ears. The allure of instant answers and effortless content generation frequently overshadows concerns about deeper cognitive shifts. Yet, a burgeoning body of research is beginning to illuminate the unsettling ways AI reliance is altering our brains and intellectual capacities.
A disturbing harbinger emerged from a Carnegie Mellon study published in early 2025, which revealed that knowledge workers who consistently utilized and implicitly trusted the accuracy of AI tools experienced a demonstrable decline in their critical thinking skills. This finding is not isolated. An earlier study established a clear link between students who routinely relied on ChatGPT for their assignments and a troubling array of negative academic outcomes, including memory loss, increased procrastination, and an overall worsening of academic performance. Perhaps most compellingly, an MIT study, employing advanced EEG scans on subjects tasked with writing essays both with and without ChatGPT, found that AI users exhibited the lowest levels of cognitive engagement during these tasks. This suggests that while AI may facilitate task completion, it does so by offloading the very cognitive processes essential for deep learning and understanding, effectively bypassing the mental heavy lifting required for genuine intellectual growth.
Working directly with students, most professors, especially those in the humanities, likely didn’t need formal research to confirm these troubling trends. Their daily interactions with students offered immediate, visceral evidence. Michael Clune, a literature professor and novelist, expressed his deep concern to The Guardian, lamenting that many students now appear “incapable of reading and analyzing, synthesizing data, all kinds of skills” that are foundational to higher education and critical thought. In a move that has drawn considerable criticism, Clune’s institution, Ohio State University, recently mandated that all students enroll in “AI fluency” courses “across every major,” ostensibly to equip them for a future dominated by the technology.
Clune, however, remains deeply critical of this institutional capitulation. “No one knows what that means,” he told the newspaper, highlighting the vagueness and ill-defined nature of such mandates. He continued, “In my case, as a literature professor, these tools actually seem to mitigate against the educational goals I have for my students.” This underscores a critical tension: while institutions rush to embrace AI, the very tools they promote may be antithetical to the core mission of fostering critical thinking, deep analysis, and independent intellectual inquiry, particularly in disciplines like literature where nuanced interpretation and original thought are paramount.
OSU’s proactive embrace of AI may appear to be an egregious example of succumbing to the powerful influence of Big Tech, but the reality is that the AI industry’s tendrils extend far and wide across the educational landscape. Major players like OpenAI and Microsoft have strategically invested tens of millions of dollars into teachers’ unions, offering extensive training programs designed to integrate their AI systems into daily classroom practice. Beyond direct funding, these companies have also forged numerous partnerships with academic institutions, providing students with free or heavily subsidized access to their sophisticated AI tools. Duke University, following such a partnership with OpenAI, even developed its own branded AI tool, “DukeGPT,” further embedding the technology into its academic ecosystem. On a global scale, xAI founder Elon Musk notably partnered with the government of El Salvador to launch what was heralded as the “world’s first nationwide AI-powered education program,” designed to provide his Grok chatbot to an astonishing one million students across thousands of public schools. These initiatives, while framed as progress, are viewed by many educators with deep suspicion.
“These companies are giving these technological tools away partly because they’re hoping to addict a generation of students,” explained Eric Hayot, a comparative literature professor at Penn State, echoing a sentiment shared by many wary academics. He emphasized the profound shift in his teaching approach: “This is part of every single class I teach now, talking to students about why I’m not using AI, why they shouldn’t use AI.” This proactive counter-narrative from educators highlights a growing resistance against the industry’s marketing strategies and the perceived erosion of traditional learning values.
Educators, however, are far from taking this assault on traditional learning lying down. A wave of pedagogical counterstrategies, many of them revivals of older methods, is emerging in direct response to the AI challenge. Professors are adapting their teaching and assessment methods to circumvent AI use and force genuine student engagement. Some now conduct oral examinations, requiring students to articulate their understanding aloud, a format AI cannot fake on their behalf. Others mandate handwritten notebooks and journals, ensuring that the process of learning and ideation is visibly documented in the student’s own hand. The faculty-run initiative AgainstAI serves as a resource here, advising professors on working around AI use with assignments such as oral exams, photographic evidence of students’ notes, and physical paper journals. These methods are not merely deterrents; they are a return to fundamental learning practices that emphasize process, critical thought, and authentic intellectual labor.
Amidst the widespread concern, some educators even dare to express a cautious optimism. Several professors reported noticing a nascent pushback from students themselves, or at least a growing cynicism regarding the utility and implications of AI tools. “I think the current crop of Gen Z students are seeing that they are the guinea pigs in this giant social experiment,” Professor Zhang observed, suggesting an emerging awareness among students about their role in this unfolding technological paradigm. This self-awareness could be a powerful catalyst for change, driving students to question their reliance and seek more meaningful forms of engagement.
Professor Clune, despite his earlier lament, also articulated a call to action. “There’s kind of defeatism, this idea that there’s no stopping technology and resistance is futile, everything will be crushed in its path,” he acknowledged, but quickly countered, “That needs to change… We can decide that we want to be human.” This profound statement encapsulates the core of the struggle: a choice between passive acceptance of technological determinism and an active assertion of human agency, critical thinking, and the enduring value of intellectual effort. The fight in classrooms today is not just about academic integrity; it’s about defining what it means to learn, to think, and ultimately, to be human in an increasingly AI-driven world.
More on AI: New AI Agent Logs Directly Into College Platform Canvas to Do Your Homework for You

