
Grammarly sued as writers challenge AI ‘expert’ clones
Journalist Julia Angwin is leading a new class action against Grammarly over its ‘Expert Review’ AI feature, escalating a broader fight over who controls human expertise and likeness in commercial AI tools.
Grammarly is facing a new class‑action lawsuit accusing its “Expert Review” feature of quietly turning real writers into unpaid AI personas, forcing an urgent test of who controls human expertise and likeness in the age of generative tools.
Filed on March 11 in federal court in Manhattan, the suit is led by award‑winning journalist Julia Angwin and alleges that Grammarly’s parent company, Superhuman Platform, misappropriated the names and identities of journalists, authors, and editors by offering AI feedback “from” them without consent or compensation (PRF Law).
How Grammarly’s ‘experts’ worked — and why writers object
Grammarly’s Expert Review was pitched as a premium, AI‑powered critique engine that could channel the perspectives of real authors, journalists, and academics on a user’s draft.
As Wired reported, users were invited to have their work reviewed by virtual versions of Stephen King, Neil deGrasse Tyson, and even deceased figures like William Zinsser and Carl Sagan, alongside working journalists who never agreed to participate.
The tool generated instant suggestions “in the voice” of those experts, with Grammarly describing the system as drawing on publicly available content and algorithmically selecting relevant subject‑matter experts based on a user’s text (eWEEK).
Angwin’s complaint argues that this crossed a critical line: instead of simply training models on publicly available text — already under legal fire in other AI cases — Grammarly allegedly deployed recognizable personas as a commercial feature, exploiting their names and identities to sell subscriptions.
According to the filing, that amounts to a violation of state publicity and privacy laws designed to prevent companies from using a person’s name, image, or likeness for profit without consent (PRF Law).
Backlash, rollback, and an opt‑out controversy
Public outrage escalated rapidly after early reports showed Expert Review impersonating living and dead writers, sometimes in jarringly off‑base ways.
Author and copy editor Benjamin Dreyer, for instance, shared that even nonsense “lorem ipsum” text drew confident stylistic advice supposedly from renowned authors, underscoring that no human expert was actually involved (Futurism).
Within days, Grammarly announced it was disabling Expert Review and said it would stop using experts’ identities without their permission, a move confirmed in statements reported by outlets including Futurism and eWEEK.
But the company’s broader stance on identity use remains contested.
A report from the Nieman Journalism Lab found that Grammarly plans to keep using author identities drawn from public sources unless individuals actively opt out, raising concerns that the burden is being shifted onto creators who may never realize their names are in the system.
A front‑line fight over AI, consent, and creator rights
The Angwin lawsuit lands amid a growing wave of legal challenges to AI training and deployment, from authors’ class actions against OpenAI and Meta to visual artists’ suits over image generators that mimic their styles (Reuters).
What makes the Grammarly case distinct is its focus on identity and persona rather than only on underlying text or style.
If a court agrees that deploying simulated “experts” without consent violates right‑of‑publicity statutes, it could force platforms to move from broad “public content” claims toward explicit, documented licenses for the human identities they monetize.
For writers, academics, and other knowledge workers, the dispute crystallizes a deeper fear: that the very reputations they’ve built over years can be converted into AI features, with value captured by platforms and little transparency or recourse.
For AI companies, it is a warning that scraping is no longer just a copyright problem but a question of who owns the commercial use of a human name and likeness — and whether “inspired by” personas can be rolled out without people ever saying yes.
Courts will now have to decide whether Expert Review was a clever product misstep or an unlawful grab at human expertise, but the message from Angwin and her fellow plaintiffs is already clear: in the next phase of AI, consent around identity will be a central battleground.