Grammarly has retracted its contentious "Expert Review" feature after a backlash from across the writing community, including journalists, authors, and academics, who objected to the tool’s practice of impersonating both living and deceased writers without permission. Marketed as a way for users to "take your writing to the next level" with suggestions "inspired by leading professionals, authors, and subject-matter experts," the feature quickly drew criticism and exposed serious ethical and identity-related problems in the rapid rollout of large language model (LLM) tools. The core of the outrage was the platform’s appropriation of personal and professional personas: AI-generated "reviews" that purported to offer advice in the style of prominent figures, effectively commodifying their intellectual identities without their consent or even knowledge.

The feature, available only to subscribers of Grammarly’s $12-a-month Pro tier, promised an elevated writing experience informed by the insights of notable figures in various fields. The reality proved more unsettling. Tech journalist Kara Swisher, whose advice the feature claimed to channel, captured the indignation bluntly: "You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck." Her reaction reflected the sense of violation felt by those whose identities were being co-opted for commercial gain. The controversy was compounded by examples of the feature’s flawed, often nonsensical output. Author and copy editor Benjamin Dreyer found that when he entered paragraphs of "lorem ipsum," the standard dummy text used in design, the feature offered him writing tips attributed to novelist Stephen King, underscoring the arbitrary nature of the AI’s "expertise." Platformer’s Casey Newton discovered a virtual version of himself dispensing writing advice, prompting his reflection: "I’ve long assumed that before too long, AI might take my job. I just assumed that someone would tell me when it happened." That sentiment captures the unease many creative professionals feel about AI absorbing their work and identity without notice. And as Futurism documented, the feature extended its impersonations to recently deceased professors, adding a macabre dimension to the ethical breach and raising questions about respect for a public figure’s legacy even after death.

In response to the public outcry, Shishir Mehrotra, CEO of Superhuman, Grammarly’s owner, announced that the company would "disabl[e]" the feature. In a LinkedIn post, Mehrotra acknowledged the "valid critical feedback from experts who are concerned that the agent misrepresented their voices," stating, "This kind of scrutiny improves our products, and we take it seriously." He added, "We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we’ll rethink our approach going forward." The company says it intends to "reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented — or not represented at all." The swift retraction and apology signal recognition of the misstep, but the episode exposes the ethical blind spots that can emerge in the pursuit of AI innovation and commercial advantage. Whether the apology and disablement will be enough to repair the mistrust, particularly among the writers Grammarly purports to serve, remains to be seen.

The controversy is a flashpoint in the broader debate over AI ethics, identity, and intellectual property. In its original form, the "Expert Review" feature illustrated several persistent problems with LLM-based tools. First, consent: the unauthorized use of individuals’ names, styles, and implied endorsements, even when algorithmically generated, raises serious questions about personal autonomy and the right to control one’s digital persona. As models grow more adept at mimicking human expression, the line between inspiration and impersonation blurs, and clearer ethical guidelines and legal frameworks become necessary. Second, the incident underscored the "black box" nature of many LLMs. Neither users nor, apparently, the company understood how these "expert reviews" were generated, or why specific authors were attached to arbitrary advice (Stephen King on lorem ipsum, for instance). That opacity undermines trust and makes it difficult to verify the origin of the AI’s output, especially when it claims to channel a specific person.

A disclaimer "buried deep in the company’s documentation," as Platformer’s Casey Newton observed, stated that references from these experts "are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities." The legalistic hedge only made matters worse: it directly contradicted the feature’s marketing, which traded on the credibility and reputation of the named figures, creating a deceptive user experience while attempting to shield the company from liability. Such practices erode consumer trust and underscore the need for honesty and clarity in how AI products are positioned and advertised.

The "Expert Review" debacle also brings into sharp focus the ongoing tension between AI as a transformative tool for assistance and its potential to displace or even usurp human creative endeavors. For many writers, the idea of an AI impersonating their style or offering "their" advice without permission feels like a profound violation of their craft and intellectual labor. This fear is not merely about job displacement but about the very essence of authorship and individual voice. As AI continues to evolve, creating ever more convincing imitations of human creativity, the need for robust protections for artists, writers, and thinkers will become paramount. This includes establishing clear rules around data sourcing for AI training, ensuring fair compensation where applicable, and, crucially, enshrining the right of individuals to control their digital identities and prevent unauthorized appropriation. The rapid pace of AI development often outstrips the ethical and legal frameworks necessary to govern its use responsibly, leading to incidents like this one. Grammarly’s experience serves as a stark reminder that technological innovation, however powerful, must be tempered by a deep respect for human rights, consent, and the integrity of creative work. The company’s promise to "reimagine" the feature with expert control is a necessary step, but the path forward for AI developers will undoubtedly involve navigating increasingly complex ethical landscapes, with public scrutiny acting as a vital check on unchecked technological ambition. The incident highlights that for AI to truly augment human capabilities, it must first learn to respect human identity and autonomy.