Grammarly Forgot to Mention Something in Its Giant Apology That Changes the Whole Story
A recent saga involving Grammarly’s “Expert Review” feature has unfolded, revealing a profound ethical misstep by the AI-powered writing assistant and its parent company, Superhuman. What began as a quiet rollout of a supposedly innovative service quickly devolved into a maelstrom of outrage, culminating in a public apology that conspicuously omitted the most critical development: a multi-million dollar class-action lawsuit. This omission fundamentally alters the narrative, transforming a story of corporate contrition into one of legal compulsion.
The controversial “Expert Review” feature, subtly introduced last year, promised users access to “can’t-miss innovations from the bleeding edge of science and tech,” as the company’s promotional material put it. The idea was to leverage artificial intelligence to provide “expert” feedback on written content, ostensibly drawing insights from established professionals. However, the execution proved disastrously flawed. Instead of merely referencing or summarizing expert knowledge, Grammarly’s AI created virtual doppelgängers, generating critiques and suggestions under the names and implied authority of real journalists, authors, and academics – all without their explicit consent or even their knowledge. This digital impersonation angered countless professionals who discovered their identities and hard-earned expertise were being commercially exploited, their reputations potentially compromised by an AI agent doling out advice in their name.
The initial response from Superhuman, Grammarly’s parent company, to the growing chorus of complaints was inadequate and dismissive. When confronted with evidence of impersonation, the company reportedly suggested that those affected should “email the company to opt out” – a solution that struck many as an unacceptable burden on victims of digital identity theft. The idea that individuals whose identities were misappropriated should proactively seek to remove themselves from a system they never consented to join underscored a deep disconnect between the tech company’s operational ethics and prevailing norms of intellectual property and personal autonomy. This tone-deaf response only fueled the already enormous backlash, painting Grammarly as a company unwilling to take full responsibility for its actions.
Following this wave of public condemnation, a reversal was inevitable. Superhuman CEO Shishir Mehrotra issued a public apology in a lengthy LinkedIn post on Wednesday, attempting to quell the mounting anger. “Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices,” Mehrotra wrote. “We hear the feedback and recognize we fell short on this.” The apology presented the retreat as a response solely to public outcry, with carefully crafted language meant to convey humility and a commitment to learning from mistakes. However, what Mehrotra conspicuously failed to mention – a detail that fundamentally recontextualizes the entire apology – was that the company wasn’t just contending with hundreds of furious writers; it was simultaneously staring down the barrel of significant litigation.
Indeed, almost concurrently with Mehrotra’s public statement, a powerful legal challenge was being mounted. Julia Angwin, the esteemed editor-in-chief of the nonprofit news organization The Markup, filed a class-action lawsuit in the Southern District of New York on that very Wednesday afternoon. This timing is crucial; it suggests that Grammarly’s apology may have been less an act of spontaneous remorse and more a strategic maneuver in response to an imminent legal threat. The lawsuit, filed by prominent attorney Peter Romer-Friedman, challenges Grammarly’s “misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors to earn profits for Grammarly and its owner, Superhuman.”
The legal filing doesn’t specify an exact amount for damages but indicates the sum is “at least $5 million,” signaling the severity and widespread nature of the alleged harm. Angwin, a Pulitzer Prize finalist known for her investigative journalism, articulated the personal and professional impact of this appropriation in a powerful statement: “I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise.” Her words underscore the profound violation felt by professionals whose identities and intellectual labor were commodified without their consent. The suit alleges that Grammarly’s actions constitute a direct infringement on the rights of these professionals, turning their established credibility into a profit-generating tool for the company.
Superhuman’s communications prior to the lawsuit’s public filing further illustrate its initial perspective and subsequent pivot. In a statement provided to Wired just before the suit was filed, Ailian Gan, Superhuman’s director of product management, explained the company’s original intent: “We built the agent to help users tap into the insights of thought leaders and experts and to give experts new ways to share their knowledge and reach new audiences.” While attempting to frame the feature as beneficial, Gan conceded, “Based on the feedback we’ve received, we clearly missed the mark. We are sorry and will do things differently going forward.” This apology, issued before the full weight of the lawsuit was publicly known, carries a different tone than Mehrotra’s later, broader apology, perhaps reflecting a gradual realization of the depth of the company’s miscalculation.
Peter Romer-Friedman, Angwin’s lawyer, expressed strong confidence in the merits of the class-action lawsuit: “Legally, we think it’s a pretty straightforward case.” That confidence stems from existing state laws in New York and California, which explicitly prohibit the commercial use of a person’s name and likeness without their express permission. These laws are designed precisely to protect individuals from the kind of unauthorized exploitation alleged in the Grammarly case, making the legal battle a clear test of established intellectual property and privacy rights in the age of AI.
Beyond the specifics of Grammarly’s “Expert Review,” Romer-Friedman touched upon a much larger and increasingly urgent societal issue: the widespread practice of large language models (LLMs) scraping vast quantities of copyrighted materials from the internet. This data collection, often conducted without the required licenses or compensation, has triggered a litany of other lawsuits against prominent AI developers like OpenAI, Stability AI, and Midjourney. The Grammarly case, therefore, serves as a microcosm of this broader struggle, pitting individual creators and their intellectual property against powerful tech companies eager to train their AI systems on the world’s knowledge. “More broadly, one of the reasons why we’re filing this case is, you know, we can see what’s happening in our society: that lots of professionals who spend years, or in Julia’s case decades, honing a skill or a trade, then see that their name or their skills are being appropriated by others without their consent,” Romer-Friedman told Wired. This statement frames the lawsuit not just as a fight for individual rights but as a crucial stand for the value of human expertise in an increasingly automated world.
Angwin herself recounted her personal discovery of the impersonation, learning about her virtual twin through Casey Newton’s Platformer. The experience was not only distressing but also professionally insulting. She revealed to Wired that her AI-generated counterpart was doling out “horrible advice,” creating “unwieldy sentences that made it harder to understand.” Her candid assessment – “I was surprised at how bad it was” – added another layer of injury, highlighting the qualitative deficiency of the AI’s “expertise” compared to her own carefully cultivated skills. This detail underscores the inherent problem with AI impersonation: not only is it unauthorized, but the quality of the output can also be subpar, further damaging the reputation of the person being mimicked.
The unfolding events surrounding Grammarly’s “Expert Review” feature represent a pivotal moment in the ongoing conversation about AI ethics, intellectual property, and corporate accountability. What began as a seemingly benign, albeit misguided, attempt at innovation quickly escalated into a legal battle that could set significant precedents for how AI companies interact with human creators and their rights. Grammarly’s belated apology, delivered without acknowledging the immediate legal pressure, reveals a company attempting to control the narrative while facing a direct challenge to its business practices. This case, alongside others, forces a critical examination of the “move fast and break things” ethos when “things” include the livelihoods and identities of professionals.