AI Use Appears to Have a “Boiling Frog” Effect on Human Cognition, New Study Warns
In a new study, a multidisciplinary team of researchers from leading institutions in the United States and United Kingdom claims to provide the first causal evidence that relying on artificial intelligence for "reasoning-intensive" cognitive labor can rapidly impair users' intellectual ability and their willingness to persist through difficulty. The finding points to a troubling trade-off: the immediate performance gains of AI assistance may carry an insidious cognitive cost, gradually eroding fundamental human intellectual capabilities.
The study covers a wide spectrum of mental tasks that constitute "reasoning-intensive" cognitive labor, from complex problem-solving in mathematics and coding to creative work like writing, critical analysis, and brainstorming new ideas. It posits that outsourcing these core cognitive functions to AI tools, even for short durations, initiates a measurable decline in independent thought and mental fortitude. "We find that AI assistance improves immediate performance, but it comes at a heavy cognitive cost," the study declares. "After just [about] 10 minutes of AI-assisted problem-solving, people who lost access to the AI performed worse and gave up more frequently than those who never used it."
While the study has yet to undergo peer review, its methodology and conclusions resonate with a growing body of research suggesting that extensive AI use can distort, dampen, and even fundamentally alter users' thinking patterns and independence. Experts are racing to understand the real-time impacts of widely used generative AI chatbots on individuals and society, and their collective warnings paint a stark picture: habitually outsourcing cognitive tasks to AI tools could precipitate a "boiling frog" conundrum, in which the gradual, almost imperceptible erosion of our cognitive "muscles" (the faculties that let us think critically, solve problems creatively, and persevere through intellectual challenges) leads to formidable, potentially irreversible long-term harm that catches us unawares.
The "boiling frog" effect serves as the study's central metaphor. "If sustained AI use erodes the motivation and persistence that drive long-term learning, these effects will accumulate over years, and by the time they are visible, they will be difficult to reverse," the study urges. "This is analogous to the 'boiling frog' effect, where each incremental act feels costless, until the cumulative effect becomes overwhelming to address." This insidiousness is perhaps the most alarming aspect of the findings: the ease with which AI provides solutions might lull users into a false sense of enhanced capability, masking a deeper decline in their own cognitive prowess that goes unrecognized until it is deeply entrenched.
To investigate these hypotheses, the researchers designed a series of experiments. In the initial phase, a cohort of approximately 350 Americans was recruited to complete a brief series of fraction equations. A little more than half of the participants were randomly granted access to a specialized chatbot, an AI assistant built on a model akin to OpenAI's GPT-5 and pre-loaded with the answers to each question on the exam. The remaining participants formed an AI-free control group, providing a baseline for comparison.
The initial results appeared to validate AI's utility: the chatbot proved highly expedient, enabling AI-aided participants to breeze through the test. But midway through the short exam, access to the AI was abruptly cut off for the assisted group. What followed was a stark demonstration of cognitive dependency: participants who had relied on the AI assistant showed a rapid, significant decline in their ability to work through reasoning questions on their own. More troublingly, their will to persist and tackle challenging problems without AI plummeted, indicating an erosion not just of skill but of motivation and resilience.
To test the robustness of these findings, the researchers ran a follow-up experiment with a larger group of nearly 670 participants. These individuals were again split into two roughly equal halves and asked to complete a brief mathematical reasoning test. Once more, one group was provided with a chatbot assistant, only to be suddenly abandoned by it and left to cognitively fend for themselves. The results mirrored those of the first experiment: performance dropped significantly and perseverance waned, and the larger sample added considerable weight to the initial observations.
The researchers probed the consistency of these outcomes in a final experiment with approximately 200 additional participants. This time the task shifted from math to a brief series of reading comprehension questions, a domain equally reliant on "reasoning-intensive" cognitive labor. The effects persisted, demonstrating that they were not limited to quantitative tasks but extended to verbal and analytical reasoning as well.
Rachit Dubey, an assistant professor at the University of California, Los Angeles, and a computational cognitive scientist who coauthored the study alongside peers from the Massachusetts Institute of Technology, Carnegie Mellon University, and the University of Oxford, articulated the gravity of these findings in an interview with Futurism. “People’s persistence drops,” Dubey explained. “Once the AI is taken away from people, it’s not that people are just giving wrong answers. They’re also not willing to try without AI.” This suggests a more profound impact than just a temporary dip in performance; it points to a fundamental shift in cognitive approach and a potential learned helplessness.
One bright spot emerged from the research: how participants used the AI appeared to significantly influence their outcomes. Those who self-reported essentially prompting the chatbot to "cough up the answers," a form of passive reception, unsurprisingly fared much worse once the AI support was withdrawn. Participants who instead engaged with the chatbot by asking for hints, clarifications, or deeper explanations, a more active, guided approach, appeared to be better off when left without assistance. The distinction suggests that AI can be a powerful learning tool if used thoughtfully and interactively rather than as a mere answer-generating machine: a cognitive partner, not a replacement.
Dubey expressed concern that relying on chatbots to entirely replace cognitive labor could produce a generation that is not only more impatient but prone to an addiction-like dependency on AI. He also worries about the psychological ramifications, particularly how AI reliance will transform people's sense of confidence and worth as they struggle to think through problems independently. "The most important thing I learned in college is the value of hard work… if I work hard, I'm capable of doing a lot of things," Dubey reflected, emphasizing that schools, communities, and policymakers should exercise extreme caution before "blindly" integrating chatbots into educational programs or professional workflows. "These are very important core human elements that we learned throughout our childhood, in high school and college years."
The broader implications for human development are profound. “If we’re offloading to AI at scale for everything and anything, what will it do to our own beliefs about our own selves?” Dubey continued, articulating a fear that extends beyond mere skill degradation. He added, “practice makes you better in many domains, and that’s what AI will take away from you… that’s what I’m most worried about. We will have a generation of learners and people who will not know what they’re capable of, and then that will really dilute human innovation and creativity.” This erosion of self-efficacy and the intrinsic motivation derived from overcoming challenges could have far-reaching consequences for societal progress and individual fulfillment.
As the researchers work to extend their experiments over longer time horizons, they are issuing a challenge to industries, educators, and AI developers alike. They urge everyone to "think about optimizing not just what people can do with AI," as they write in the study, "but what they can do without it." It is a call for a balanced approach to AI integration, one that leverages its power for augmentation without undermining the cognitive capacities that underpin human intelligence, resilience, and creativity. The future, they suggest, depends on our ability to cultivate intellectual strength both with and independently of our increasingly intelligent machines.
More on AI and cognition: College Students Losing Ability to Participate in Class Discussions Due to Offloading Their Thinking to AI

