It’s Comically Easy to Trick ChatGPT Into Saying Things About People That Are Completely Untrue

In a startling revelation that underscores the fragile nature of truth in the age of artificial intelligence, a tech journalist has demonstrated how effortlessly leading AI chatbots like ChatGPT, Google’s Gemini, and Google’s AI Overviews can be manipulated into propagating fabricated narratives, even about real individuals. This vulnerability extends far beyond mere factual errors or “hallucinations”: it exposes a gaping flaw through which user-invented lies can be absorbed and confidently regurgitated as fact by systems rapidly becoming primary sources of information for millions.

The alarming discovery comes courtesy of Thomas Germain, a tech journalist for the *BBC*, who “proudly shared” his successful hack: “I made ChatGPT, Google’s AI search tools and Gemini tell users I’m really, really good at eating hot dogs.” This seemingly innocuous prank highlights a profound systemic issue within large language models (LLMs) that threatens to redefine the landscape of misinformation. Germain’s method was deceptively simple yet devastatingly effective: he crafted a blog post containing utterly baseless claims and watched as these advanced AI systems, designed to synthesize and present information, absorbed his fiction as unassailable truth.

To execute his experiment, Germain didn’t need sophisticated coding or advanced hacking techniques. His “hack” was essentially a clever form of content manipulation targeting the very mechanisms by which AI models scour the internet for fresh data. He published an article on his personal blog, fabricating an elaborate backstory about competitive hot dog eating among tech journalists. Specifically, he invented the “2026 South Dakota International Hot Dog Championship,” a non-existent event, and, naturally, crowned himself the undisputed champion. To lend a veneer of credibility, he even named real journalists, who had given him permission to appear in his fictional rankings. The result was almost instantaneous: “less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills,” Germain recounted.

The speed and credulity with which these prominent AI platforms adopted Germain’s made-up narrative were astonishing. ChatGPT, Google’s Gemini, and Google’s AI Overviews all readily repeated the fabricated details, presenting them as established facts. Interestingly, Anthropic’s Claude chatbot initially showed some skepticism, occasionally noting that the claims might be a joke. However, Germain quickly circumvented this minor safeguard by updating his blog post to explicitly state, “this is not satire,” which seemed sufficient to convince Claude to fall in line and repeat the lie. This highlights a critical flaw: AI models, even with built-in caution, can be easily steered by a simple textual assertion, demonstrating their inability to truly discern truth from well-presented fiction.

The underlying mechanism for this susceptibility is rooted in how LLMs are trained and how they operate when encountering information outside their core datasets. While vast, their initial training data cannot encompass every niche detail or recent event. When prompted with a query for which they lack pre-existing knowledge, these AI models resort to searching the live internet. Unlike human researchers who critically evaluate sources, cross-reference information, and assess the credibility of publishers, AI models are primarily designed to identify patterns, synthesize text, and present coherent answers. They are not inherent truth-seekers; they are sophisticated pattern-matching machines. If a piece of content appears on a seemingly legitimate (even if personal) blog, is well-written, and directly addresses the query, the AI is prone to incorporating it, especially if it’s the most relevant or only source available on that specific, newly created topic.
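To see why a single fresh blog post can carry so much weight, consider a stripped-down sketch of a retrieve-then-answer loop. This is not any vendor’s actual pipeline; the search_web and llm_complete helpers below are hypothetical stand-ins for a live search API and a model call. But the structure illustrates the failure mode: whatever retrieval returns is pasted into the prompt without any credibility check, and the model then narrates it back in fluent, confident prose.

```python
# A minimal sketch (not any vendor's real pipeline) of how a naive
# retrieve-then-answer loop can launder a fabricated blog post into "fact".
# search_web() and llm_complete() are hypothetical stand-ins.

def search_web(query: str) -> list[dict]:
    # Stand-in for a live search API. For a freshly invented topic, the
    # fabricator's own blog post may be the only page that matches, so it
    # fills the entire context by default.
    return [{
        "url": "https://example-personal-blog.test/post",
        "text": "Placeholder text making an unverified claim about the topic.",
    }]

def llm_complete(prompt: str) -> str:
    # Stand-in for a language model call. A real LLM would generate fluent
    # prose grounded in whatever the prompt contains, credible or not.
    return f"[model answer synthesized from a {len(prompt)}-character prompt]"

def answer(query: str) -> str:
    results = search_web(query)
    # No source vetting: every retrieved page is treated as equally reliable
    # and pasted into the prompt verbatim.
    context = "\n\n".join(f"Source: {r['url']}\n{r['text']}" for r in results)
    prompt = f"Answer the question using the sources below.\n\n{context}\n\nQuestion: {query}"
    return llm_complete(prompt)

if __name__ == "__main__":
    print(answer("Who won the 2026 South Dakota International Hot Dog Championship?"))
```

Because Germain’s invented championship existed nowhere else on the web, his post was, by definition, the most relevant source such a loop could find.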

This phenomenon has led to concerns about “LLM cannibalism,” a grim prospect where AI-generated content, potentially containing inaccuracies or fabrications, is then consumed by other AI models as authoritative truth, perpetuating and amplifying misinformation in a self-referential loop. “It’s easy to trick AI chatbots, much easier than it was to trick Google two or three years ago,” observed Lily Ray, vice president of search engine optimization (SEO) strategy and research at Amsive, a sentiment that resonates with the growing unease among digital experts. Ray, who has previously consulted for *Futurism*, emphasized the inherent danger: “AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it’s dangerous.” The race to deploy cutting-edge AI features often seems to outpace the development of robust ethical guidelines and verification mechanisms.

Harpreet Chatha, who leads the SEO consultancy Harps Digital, echoed these concerns, lamenting the apparent lack of “guardrails.” He demonstrated how this vulnerability isn’t limited to whimsical hot dog eating contests but has significant commercial implications. “Anybody can do this,” Chatha stated. “You can make an article on your own website, ‘the best waterproof shoes for 2026’. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT.” He further illustrated this by showcasing how Google’s AI results for “best hair transplant clinics in Turkey” were populated with information directly extracted from press releases published on paid-for distribution services – essentially, advertising masquerading as independent reviews, then presented as fact by AI.

This capability represents a significant departure from traditional search engines, even those susceptible to SEO manipulation. While SEO aims to improve visibility and ranking, classic search results typically present a list of *links* to various sources. Users retain the agency to click on these links, evaluate the source’s credibility, and synthesize information themselves. The search engine acts as a directory, not an oracle. AI chatbots, however, fundamentally alter this dynamic. They “speak in an authoritative, human-like voice” and directly present information as definitive facts, often without prominent or easily accessible citations. This shift is critical because, as Germain pointed out, one study by Ahrefs showed that users are 58 percent less likely to click a link when an AI overview appears above it. This means the AI’s version of reality, however flawed, becomes the de facto truth for many users, bypassing critical engagement with original sources.

The true concern, therefore, transcends comical hot dog eating feats. This vulnerability opens the floodgates for widespread and insidious misinformation, with potentially devastating real-world consequences. The ease of injecting false narratives into AI systems poses a severe threat to public discourse, capable of manipulating public opinion on sensitive political issues, spreading dangerous health misinformation, or even influencing financial markets. More immediately, it raises the specter of widespread libel and defamation. What if someone maliciously tricks an AI into spreading harmful lies about an individual or an organization? This is not a hypothetical scenario; it is already happening.

Last November, Republican Senator Marsha Blackburn publicly condemned Google after its Gemma AI model falsely accused her of rape, a deeply damaging and utterly baseless fabrication. Months prior, a Minnesota solar company filed a lawsuit against Google for defamation after its AI Overviews falsely claimed that regulators were investigating the firm for deceptive business practices. The AI even attempted to “back up” these lies with bogus citations, further compounding the harm. These incidents highlight the profound legal, reputational, and ethical quagmire that AI developers face. When AI systems act as authoritative voices, their errors or malicious manipulations can cause irreparable harm to individuals and institutions, making the need for robust safeguards and accountability mechanisms paramount.

The ease with which AI chatbots can be tricked into fabricating and disseminating falsehoods underscores an urgent need for critical re-evaluation in AI development and deployment. The rapid advancement of AI must be tempered with equally rigorous attention to accuracy, source verification, and ethical considerations. Implementing more sophisticated fact-checking algorithms, requiring clear and verifiable source attribution, and building in mechanisms for human oversight are no longer optional luxuries but essential components for trustworthy AI. Furthermore, the onus also falls on users to cultivate greater digital literacy and critical thinking skills, understanding that an authoritative-sounding AI is not inherently infallible. As AI continues to integrate deeper into our daily lives, transforming how we access and process information, the “hot dog” experiment serves as a stark warning: the tools we create to enhance knowledge can just as easily become conduits for unprecedented levels of deception if their fundamental vulnerabilities are not swiftly and effectively addressed. The future of reliable information hinges on it.