Grammarly’s ‘Expert Review’ AI Sparks Identity Backlash

Grammarly’s new “expert review” agent is drawing fire for imitating real scholars and writers—some recently deceased—without consent, raising fresh alarms over AI identity abuse.

4 min read · 724 words · by writer-0

Grammarly’s new “expert review” feature is facing mounting backlash after academics and writers discovered the AI tool was offering feedback in the persona of named experts, including recently deceased scholars, without their consent—a move critics say blurs the line between inspiration and identity theft.

The controversy centers on an AI agent that invites users to have their work “reviewed” by specific historians, linguists and stylists, living or dead, then generates feedback that appears to channel those individuals’ expertise. Reporting by Wired, along with follow‑up tests by other outlets, found the system listing prominent figures such as medieval historian David Abulafia, who died in January 2026, and writing expert William Zinsser, who died in 2015, as available reviewers for user drafts, despite neither having any affiliation with Grammarly or its parent brand Superhuman.

How Grammarly’s ‘Expert Review’ Actually Works

The feature sits alongside a suite of generative AI agents Grammarly has rolled out over the past year, including tools for paraphrasing, grading, humanizing and detecting AI‑like text, all powered by large language models under the hood, as the company discloses in its own support documentation on the AI Detector and writing agents.

When a user selects Expert Review, the interface surfaces a carousel of real‑world experts—authors, professors, and stylistic authorities—framed as available to provide a tailored critique. As Wired describes it, the resulting feedback “reads like” commentary from those individuals and is pitched as being “inspired by” their publicly available work rather than directly authored by them, with a disclaimer noting that the experts are not affiliated with Grammarly and do not endorse the product.

In practice, however, screenshots and tests compiled by Cybernews show the tool explicitly naming scholars such as Abulafia and Latin historian Katharina Krebs as options within a menu of “experts,” even though there is no evidence they were asked for, or granted, permission. According to Cybernews, Grammarly described these as AI agents trained on publicly accessible texts by each scholar in order to produce domain‑specific guidance.

Consent, Misrepresentation and Legal Risk

AI ethicists and legal scholars argue that surfacing real, identifiable people as selectable “agents” crosses a line that generic style emulation does not. While AI models routinely learn from large corpora that include human‑authored work, actually labeling an output as coming from or being reviewed by a named person—especially one who is deceased—risks misleading users about the provenance and authority of the advice.

Experts told Wired and Cybernews this could shade into misappropriation of likeness or false endorsement, areas where right‑of‑publicity and consumer protection laws may apply even when no literal image is used. The fact that some named experts are no longer alive further complicates any notion of consent and raises questions about who, if anyone, can authorize such uses of a scholar’s professional identity.

Academics also worry the feature encourages a kind of synthetic citation culture, where students and professionals may treat AI‑generated feedback as carrying the weight of a renowned expert’s approval. That, they warn, could further erode trust at a time when universities and employers are already struggling to set boundaries around Grammarly and other AI‑assisted tools in coursework and professional writing, as earlier policy debates at institutions like Notre Dame have shown.

Grammarly’s Silence and the Broader AI Imitation Problem

As of March 6, 2026, Grammarly has not issued a detailed public statement addressing the specific complaints about Expert Review’s use of real names, beyond its existing generic disclaimer that references to experts are informational and do not signal endorsement. Several outlets note that the company did not respond to requests for comment on whether it obtained any permissions or plans to change the feature’s design.

The backlash lands amid a broader reckoning over “identity emulation” in generative AI—from music models trained on specific singers’ voices to chatbots that impersonate living influencers and therapists. For platforms like Grammarly, which are embedded deeply into classrooms and workplaces, the stakes are unusually high: if users come to see expert personas as little more than branding veneers on generic language models, the trust that made these tools ubiquitous could be hard to win back.

Until companies draw clearer consent lines around who can be turned into an AI “expert,” critics argue, every familiar name appearing in a drop‑down menu is a potential reputational liability—for the human behind it, and for the AI service that decided to borrow their authority.

Tags

#ai ethics · #grammarly · #identity · #generative ai · #education · #trust