In May 2024, Google rolled out its AI Overviews feature, a generative artificial intelligence integration designed to reshape search results by providing immediate, synthesized answers. The stated ambition was to make information “easier to find” by sparing users the need to click through multiple links. The launch quickly ran into trouble, however, as the feature’s early output was marred by a series of widely publicized “hallucinations” that undermined its reliability and sparked widespread public concern.

The early days of AI Overviews were characterized by a string of bewildering and often comical inaccuracies that quickly went viral. Users were advised to engage in bizarre and potentially harmful activities, such as “eating rocks” to improve health or “putting glue on their pizzas” to prevent cheese from sliding off. Absurd as they were, these recommendations served as a stark, early illustration of a persistent and fundamental limitation of large language model (LLM)-based tools: the models are designed to predict plausible sequences of words based on their training data, not to exercise genuine understanding or common sense. Other, less dangerous but equally frustrating gaffes included the AI’s inability to reliably identify the current year and its tendency to fabricate elaborate explanations for nonexistent idioms, further demonstrating its disconnect from factual accuracy and real-world knowledge. While these initial missteps might have been dismissed as harmless blunders, leading at most to user frustration or a good laugh, the underlying problem of AI hallucination has now escalated to a far more critical and potentially life-threatening level.
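
That “plausible sequence of words” behavior can be made concrete with a minimal, purely illustrative Python sketch. The tiny probability table below is invented for the example and is not drawn from any real model; the point is simply that a system like this returns whatever continuation is statistically most likely in its training text, with no check on whether the result is true or safe.

```python
# Toy illustration of next-word prediction: the "model" is just a table of
# continuation probabilities over (hypothetical) training text. It has no
# notion of truth -- only of what tends to follow what.

next_word_probs = {
    ("cheese", "slides", "off"): {"the": 0.6, "your": 0.4},
    ("add", "some"): {"glue": 0.5, "cheese": 0.3, "oil": 0.2},  # forum jokes live in the data too
}

def predict_next(context):
    """Return the most probable continuation for a known context, else None."""
    options = next_word_probs.get(tuple(context), {})
    return max(options, key=options.get) if options else None

# The statistically most "plausible" word wins, whether or not it is good advice.
print(predict_next(["add", "some"]))  # -> "glue"
```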

A new, deeply concerning investigation by The Guardian has revealed that Google’s AI Overviews is not merely making up recipes or misstating facts; it is dispensing inaccurate and potentially dangerous health information. The report, published in early 2026, laid bare a perilous landscape where AI-powered summaries, intended to offer quick answers, are instead loaded with misleading medical advice that could put individuals at severe risk. Experts interviewed by the newspaper issued grave warnings, suggesting it is only a matter of time before this bad advice endangers users, or, in a worst-case scenario, contributes directly to someone’s death. This shift from humorous blunders to serious public health threats marks a critical turning point in the ongoing debate about AI’s role in disseminating information.

The severity of the issue cannot be overstated, as evidenced by specific examples uncovered by The Guardian. In one alarming instance, AI Overviews advised individuals suffering from pancreatic cancer to avoid high-fat foods. This recommendation stands in direct opposition to established medical guidelines, which often suggest a high-fat diet for pancreatic cancer patients to help manage malabsorption and maintain nutritional intake, particularly when pancreatic enzyme supplements are prescribed. Such erroneous advice could lead vulnerable patients to make dietary choices that exacerbate their condition, hinder treatment effectiveness, and severely impact their quality of life. Furthermore, the AI tool completely bungled information related to women’s cancer tests, providing summaries that were either incorrect or incomplete. Accurate and timely information about cancer screenings, symptoms, and diagnostic procedures is paramount for early detection and successful treatment outcomes. Misleading advice in this critical area could lead individuals to overlook real symptoms, delay necessary screenings, or misinterpret test results, with potentially fatal consequences. These examples underscore a profound failure in the AI’s ability to process and synthesize sensitive medical information accurately, especially when human lives hang in the balance.

The situation is made even more precarious by the prevailing human tendency to turn to the internet for self-diagnosis and answers during moments of worry and crisis. In an age where healthcare access can be challenging and waiting times long, the allure of instant online answers is strong. Stephanie Parker, director of digital at the end-of-life charity Marie Curie, articulated this concern to The Guardian, stating, “People turn to the internet in moments of worry and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health.” This sentiment highlights the ethical imperative for information providers, especially those leveraging powerful AI, to ensure absolute accuracy in health-related queries. The trust placed in Google’s search results, built over decades, makes its AI’s inaccuracies particularly insidious.

Beyond the outright false information, experts also expressed alarm over the AI Overviews feature generating completely different responses to identical prompts. This inconsistency, a well-documented shortcoming of large language model-based tools, can lead to profound confusion and further erode user trust. When a user asks the same health question multiple times and receives varying answers, it becomes hard to tell which response, if any, to rely on. Stephen Buckle, head of information at the mental health charity Mind, shared his dismay with the newspaper, detailing instances where AI Overviews offered “very dangerous advice” concerning eating disorders and psychosis. He found these summaries to be “incorrect, harmful or could lead people to avoid seeking help,” a devastating outcome for individuals already grappling with complex and sensitive mental health challenges. The probabilistic nature of LLMs, which allows for varied outputs, becomes a serious liability when precision and consistency are critical for safety.
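
That inconsistency follows from how these systems generate text: rather than always returning the single most likely continuation, they typically sample from a probability distribution, so the same prompt can produce different completions on different runs. The Python sketch below is a minimal, hypothetical illustration of that sampling step only; the candidate answers and probabilities are invented and say nothing about Google’s actual system.

```python
import random

# Hypothetical answer distribution for one repeated health query; the options
# and probabilities are invented purely to illustrate sampled decoding.
candidate_answers = {
    "avoid high-fat foods": 0.40,
    "eat small, frequent high-fat meals with enzyme supplements": 0.35,
    "no dietary changes are needed": 0.25,
}

def sample_answer(dist):
    """Pick one answer at random, weighted by probability (temperature > 0 decoding)."""
    answers, weights = zip(*dist.items())
    return random.choices(answers, weights=weights, k=1)[0]

# Asking the identical question twice can surface different, even contradictory, advice.
for _ in range(2):
    print(sample_answer(candidate_answers))
```

Run a few times, the loop can print two different answers to the same query, which is exactly the behavior experts flagged: the output varies even though nothing about the question has changed.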

In response to The Guardian’s findings, a Google spokesperson issued a statement, asserting that the tech giant “invests significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information.” While Google’s commitment to quality is crucial, the existence of a minority of inaccurate or dangerous responses, especially in the realm of health, remains unacceptable. The “vast majority” argument falls short when even a single instance of erroneous medical advice could have severe, irreversible consequences. This highlights the immense responsibility placed on Google to not only train its models on vast datasets but also to implement robust guardrails, rigorous fact-checking mechanisms, and continuous human oversight, particularly for domains as sensitive as health. The current results of the investigation clearly indicate that the company has a substantial amount of work ahead to ensure its AI tool ceases to dispense dangerous health misinformation.

The risks associated with AI-generated health advice are poised to grow, fueled by a striking level of public trust in these emerging technologies. An April 2025 survey conducted by the University of Pennsylvania’s Annenberg Public Policy Center painted a clear picture of this growing reliance. The study found that nearly eight in ten adults in the U.S. were likely to go online for answers about health symptoms and conditions. More alarmingly, almost two-thirds of those individuals considered AI-generated results “somewhat or very reliable,” a level of trust that outpaces the technology’s current capabilities. This widespread belief in AI’s reliability creates fertile ground for misinformation to take root, making it difficult for individuals to distinguish accurate medical guidance from AI-generated fabrications. Interestingly, the same survey found that just under half of respondents were uncomfortable with healthcare providers using AI to make decisions about their care, suggesting a nuanced but inconsistent view of AI’s role in health.

Further compounding these concerns, a separate MIT study published around May 2025 provided even more sobering insights into user behavior. The research revealed that participants not only deemed low-accuracy AI-generated responses as “valid, trustworthy, and complete/satisfactory,” but they also “indicated a high tendency to follow the potentially harmful medical advice and incorrectly seek unnecessary medical attention as a result of the response provided.” This finding is particularly alarming as it demonstrates a psychological susceptibility to AI’s authoritative tone, regardless of the factual accuracy of its output. People are not just trusting AI; they are actively willing to act on its advice, even when that advice is flawed or dangerous. This behavioral pattern highlights a significant public health challenge, as individuals may unwittingly expose themselves to harm by prioritizing AI over proven medical expertise.

This escalating reliance on AI for health information occurs despite numerous studies and real-world incidents consistently proving AI models to be strikingly poor replacements for human medical professionals. The nuance, empathy, critical thinking, and ethical considerations inherent in medical practice are currently beyond the grasp of even the most advanced AI. Consequently, doctors and other licensed healthcare providers are left with the daunting and increasingly difficult task of dispelling myths, correcting misinformation, and trying to keep patients from being led down wrong and dangerous paths by hallucinating AI. The clinical setting is increasingly becoming a battleground against AI-generated falsehoods, consuming valuable time and resources that could otherwise be dedicated to direct patient care.

Professional medical organizations worldwide are sounding the alarm. On its website, the Canadian Medical Association (CMA) unequivocally labels AI-generated health advice as “dangerous.” The CMA points out that the inherent flaws of AI, including hallucinations, algorithmic biases, and the potential for outdated facts, can “mislead you and potentially harm your health” if users choose to follow the generated advice. It emphasizes that AI lacks the contextual understanding, the ability to interpret individual symptoms, and the ethical framework necessary for responsible medical guidance. Experts across the board continue to strongly advise people to consult human doctors and other licensed healthcare professionals instead of relying on AI for medical advice. However, given the many systemic barriers to adequate healthcare around the world, including long wait times, high costs, and geographical limitations, that is often easier said than done. This unfortunate reality creates a vicious cycle, pushing vulnerable populations towards readily accessible, yet potentially dangerous, AI solutions.

In a final, ironic twist, AI Overviews sometimes appears to possess a flicker of self-awareness regarding its own profound shortcomings. When queried directly on whether it should be trusted for health advice, the feature paradoxically pointed us to The Guardian’s very investigation that exposed its flaws. “A Guardian investigation has found that Google’s AI Overviews have displayed false and misleading health information that could put people at risk of harm,” read the AI Overviews’ reply. This meta-commentary, while perhaps an unintended consequence of its training data reflecting current news, underscores the critical chasm between AI’s potential and its current reality in sensitive domains. It serves as a stark reminder that even the AI itself, in a convoluted manner, acknowledges the significant and dangerous problems it presents. The ongoing saga of Google’s AI Overviews is a powerful testament to the urgent need for greater caution, transparency, and accountability as AI increasingly infiltrates critical aspects of human life. The stakes, particularly in health, are simply too high to get it wrong.

More on AI Overviews: Google’s AI Summaries Are Destroying the Lives of Recipe Developers