Grammarly Ignites Academic Outcry With “Necromantic” AI Feature That Impersonates Deceased Scholars for Manuscript Reviews

Grammarly, the widely used digital writing assistant, is at the center of a controversy in academic circles, facing accusations of “necromancy” after the discovery of a new feature that lets users solicit manuscript reviews from AI-generated personas of real professors, a number of whom are dead. The move, which blurs the line between digital assistance and the posthumous appropriation of intellectual identity, has sparked a widespread backlash and raised serious ethical, legal, and moral questions about the future of artificial intelligence in education and research.

The issue surfaced on Sunday, when Verena Krebs, a medieval historian and professor at Ruhr-University Bochum, shared a screenshot that quickly went viral across professional networks. The image showed Grammarly’s “Expert Review” tool offering the renowned historian David Abulafia as one of the available “experts” who could provide feedback on users’ scholarly papers. The problem: Abulafia, a distinguished figure in medieval studies, died in January of this year. His unwitting posthumous enlistment by an AI tool immediately drew fierce responses from academic communities worldwide.

The revelation prompted a wave of condemnation from scholars across disciplines, many of whom saw the feature as a violation of academic integrity and personal dignity. Vanessa Heggie, an associate professor in the history of science and medicine at the University of Birmingham, captured a widespread sentiment in a strongly worded LinkedIn post: “Grammarly is now offering ‘expert review’ of your work by living and dead academics. Without anyone’s explicit permission it’s creating little LLMs based on their scraped work and using their names and reputation.” Her words go to the core concern: the unauthorized creation of digital doppelgangers that trade on scholars’ intellectual labor and established reputations without their consent, or, in the case of the deceased, without the consent of their estates or next of kin. Critics argue this unauthorized digital resurrection breaches intellectual property rights and sets an alarming precedent for how AI might exploit human legacies. Claire E. Aubin, a historian and host of the “This Guy Sucked” podcast, summed up the collective disbelief in a now-viral Bluesky post: “I have seen a lot of cursed stuff in my time in academia but this is among the most cursed.” The reaction reflects a community that values intellectual rigor, attribution, and personal agency confronting a technology that appears to disregard all three.

Grammarly’s own promotional materials describe “Expert Review” as an advanced AI agent that helps users “meet the expectations of your discipline and your project by drawing on insights from subject-matter experts and trusted publications.” The tool is part of an expanded suite of AI features Grammarly rolled out last summer. Its mechanics are simple, if ethically fraught: users upload a document to Grammarly’s AI platform, select the “Expert Review” agent, and choose an “expert” from a curated list. The AI then generates suggestions and, most controversially, can produce revised versions of the user’s writing based on them. As Grammarly’s website puts it, “Revise the draft yourself or let Expert Review rework things for you.” That promise of AI-driven revision, however convenient, raises questions about authorship, originality, and the pedagogical value of genuine intellectual struggle and human feedback. The tool’s central offense is its impersonation of real academics: it presents AI-generated feedback under their names, lending an unearned authority to algorithms. The impersonation is compounded by the familiar ethical problems surrounding training data for large language models (LLMs), which are built on vast quantities of text scraped from the internet, often without explicit consent from, or compensation to, the original creators. That Grammarly’s AI extends this impersonation to deceased professors is, in the eyes of many scholars, what elevates the transgression from an ethical lapse to a grievous insult, bordering on the sacrilegious.

“Necromancy” has become the term of choice for the breach academics perceive. Kathleen Alves, an associate professor of English at CUNY, minced no words in a Bluesky post, calling the feature “literally digital necromancy.” The metaphor captures the indignation of those who see the digital resurrection of deceased scholars as a violation of their legacies and a disrespect to their intellectual contributions. Hisham Zerriffi, an associate professor in forest resources management at the University of British Columbia, echoed the sentiment: “NecromancerLLM. Seriously, dead or alive, this is just wrong.” The outrage points to a deeper anxiety about the commodification of human identity and intellect in the age of AI. When a tool purports to speak with the voice and authority of a deceased scholar, it raises uncomfortable questions about posthumous rights, control of one’s intellectual property beyond the grave, and the nature of human contribution to knowledge. The absence of consent from living academics, and the impossibility of obtaining it from the dead, makes “Expert Review” a stark symbol of AI’s capacity to disregard human autonomy and ethical boundaries in pursuit of utility.

Beyond the immediate shock, the controversy touches on intellectual property rights, academic integrity, and the nature of scholarly review. Using “scraped work” to train LLMs that then generate feedback in the persona of the original authors sidesteps traditional copyright protections and intellectual ownership. Academics spend years, often decades, developing expertise, cultivating distinctive voices, and building reputations. Having an AI model synthesize that work and deliver generic, algorithmically derived feedback under their names, without permission or acknowledgment, diminishes the value of their original contributions and risks misrepresenting their nuanced perspectives. There is a tangible fear that such tools could erode trust in academic review, which relies on human judgment, peer interaction, and the accountability of identifiable individuals. An AI that misinterprets a scholar’s body of work while operating under their name, however sophisticated, poses a significant risk to academic credibility.

This is not an isolated incident for Grammarly; it fits a broader strategy of integrating AI deeply into education, often in ways that challenge established norms. “Expert Review” sits alongside another controversial tool: an “AI grader agent” that gives students personalized feedback on homework by consulting “publicly available instructor information” about their teachers and professors. The feature raises privacy concerns for educators and questions about the authenticity of student learning when feedback comes from an AI attempting to mimic a specific instructor’s style or expectations. More broadly, the landscape of AI in education is full of tools that strain academic integrity, as in reports of “AI agents logging directly into college platforms like Canvas to do homework.” The trend points to a growing market for AI that bypasses the learning process itself, potentially undermining the development of critical thinking, research skills, and genuine authorship, and leaving educators to grapple with plagiarism detection, equitable assessment, and the fundamental purpose of higher education.

Much of this stems from the ethical vacuum in which AI development currently operates. Companies like Grammarly, however genuinely they aim to build helpful tools, appear to be moving forward without frameworks that prioritize consent, transparency, and respect for human identity and intellectual property. Building AI personas of the deceased demands a reevaluation of digital legacy and posthumous rights: Who owns the digital ghost of a scholar? Who decides how their intellectual contributions are recontextualized and used by algorithms? There is an urgent need for clearer guidelines, possibly even legislation, to answer these questions. Without explicit opt-in mechanisms, clear disclosure of AI’s role, and robust protections for intellectual property and personal identity, such technologies risk fostering distrust and ethical ambiguity, where the line between assistance and appropriation becomes dangerously blurred.

Looking ahead, the Grammarly controversy marks a critical inflection point in the dialogue about AI and academia. AI holds real potential to transform research, writing, and learning, but its integration must be guided by a strong ethical compass and a deep respect for human values. The “necromancy” accusation is not merely a sensational headline; it expresses a concern that in the rush to innovate, the humanity at the core of intellectual work is being overlooked, or worse, exploited. The nuanced understanding, critical judgment, and lived experience of human experts cannot be fully replicated by algorithms, however advanced. The future of academia in an AI-driven world will depend on a delicate balance: harnessing AI to augment human capabilities while rigorously upholding integrity, consent, and the irreplaceable value of genuine human scholarship. This incident is a stark reminder that as AI grows more sophisticated, the ethics of its deployment must evolve at least as fast.