Study Suggests The New York Times Has Published AI-Generated Articles
Modern journalism is undergoing a profound transformation, with artificial intelligence emerging as a pervasive, if often invisible, force. The odds that institutions like *The New York Times* and other major news outlets have published AI-generated articles, whether inadvertently or knowingly, now appear high, stirring debate and concern within the media industry and among the public.
Speculation reached a fever pitch earlier this week over a “Modern Love” column that ran in *The New York Times* last November. The controversy ignited when Becky Tuch, editor of *Lit Mag News*, posted an excerpt of the piece on X with a blunt assessment: “this reads EXACTLY like AI slop.” Her post sparked widespread discussion about the telltale signs of algorithmic authorship.
As it turns out, Tuch’s suspicion was not unfounded. A subsequent investigation by *The Atlantic* found that the column’s author, Kate Gilgan, while denying that she copy-pasted from an AI model, admitted to having “utilized AI as a tool.” Gilgan said she turned to several chatbots, including ChatGPT, Claude, and Gemini, for “inspiration and guidance and correction” during her writing process.
Gilgan maintained her stance, asserting, “I used AI as a collaborative editor and not as a content generator.” In the current era of the AI boom, however, where the influence of these tools on their users often runs deeper and subtler than perceived, that distinction feels increasingly tenuous. Research suggests that prolonged interaction with AI can shape a user’s thought processes, writing style, and even creative impulses. Consulting a chatbot continuously, even for mere “guidance,” makes it almost inevitable that its distinctive style, linguistic patterns, and structural preferences will rub off on the human author, blurring the line between human creativity and algorithmic influence.
The scale of AI’s infiltration into journalism may be far greater than anecdotal evidence suggests. The public debate ignited by controversies such as Gilgan’s column spurred a group of AI researchers to investigate the prevalence of AI-generated material in American newspapers. Using an AI-detection tool developed by the startup Pangram Labs, the researchers analyzed newly published articles; their findings, released as a preprint study in October, have raised considerable alarm across the media landscape.
The study’s conclusions are stark: approximately nine percent of newly published articles were either partially or fully AI-generated. The phenomenon was most pronounced at smaller local outlets, where resource constraints can make AI an attractive, cost-effective way to produce content. But the researchers’ look at the “newspapers of record,” a category that includes *The New York Times*, *The Wall Street Journal*, and *The Washington Post*, yielded an even more unsettling discovery: opinion pieces at these outlets were more than six times as likely to contain AI-generated content as articles produced by their newsrooms.
A disclaimer about AI-detection technology is in order. Many AI detectors, particularly freely available ones, are notoriously unreliable. A screenshot of one detector flagging a passage from Mary Shelley’s *Frankenstein* as “100 percent AI generated” recently went viral, drawing considerable mockery and highlighting the limitations of these tools. Such false accusations occur with alarming frequency, underscoring how difficult it is to distinguish human writing from sophisticated algorithmic mimicry.
Pangram Labs’ tool, however, is generally regarded as among the more reliable options available, a reputation supported by head-to-head tests against other detection systems. The study’s own findings bolster that credibility: the detector flagged AI-generated content predominantly in opinion pieces rather than in core news articles. Opinion pieces are often written by outside contributors who are not subject to the same editorial oversight as staff journalists, making them a plausible entry point for AI-assisted or AI-generated copy. And given the well-documented flood of AI-generated “dreck” into scientific journals, there is little reason to expect news outlets to be exempt from the same infiltration.
The deepening entanglement of news organizations with AI companies signals a complex and potentially perilous path forward. *The Washington Post* has launched an AI-generated podcast that summarizes its latest stories and introduced a chatbot to field reader questions. *The New York Times* is using AI to generate headlines, a task once the exclusive domain of human editors and one that directly shapes click-through rates and reader engagement. *Bloomberg* offers AI-generated summaries of its articles. Perhaps most tellingly, a senior manager at *The Associated Press* recently told staffers that “resistance” to AI was “futile,” signaling a pragmatic, if resigned, acceptance of AI’s integration into journalistic workflows.
Allowing these tools to permeate newsrooms unchecked, however, could prove a slippery slope. The potential for errors, misinformation, and the erosion of journalistic integrity is substantial. A stark illustration came last month, when a senior *Ars Technica* reporter was fired after AI-fabricated quotes were discovered in one of his articles, forcing a humiliating retraction. The reporter’s defense, that he did not use AI to write the article itself but only to summarize his notes, highlights a critical vulnerability: in summarizing, the chatbot “hallucinated” a quote, which the reporter included in the belief that it was genuine. Even when AI is used as a seemingly benign tool for efficiency, it can introduce profound inaccuracies that jeopardize the trust between news outlets and their readers.
The implications are far-reaching. As AI grows more sophisticated, distinguishing human-crafted content from machine-generated text will become increasingly difficult for editors and readers alike. That will require a re-evaluation of journalism’s ethical guidelines, greater transparency from news organizations about how they use AI, and stronger critical media literacy among the public. The study’s findings are a potent reminder that while AI offers real efficiencies, its integration into the bedrock of public information demands caution and a steadfast commitment to the integrity, authenticity, and human judgment that define credible journalism.

