The intersection of generative artificial intelligence and intellectual property law has reached a new flashpoint as Superhuman, the technology firm behind the widely used writing enhancement software Grammarly, faces a high-stakes class action lawsuit. Filed in the United States District Court for the Southern District of New York, the legal action centers on a controversial AI-driven feature known as "Expert Review." This tool allegedly misappropriated the names, reputations, and professional identities of hundreds of prominent journalists, authors, and academics to provide automated editing suggestions, all without the consent of the individuals involved.
The lead plaintiff in the case is Julia Angwin, a Pulitzer Prize-winning investigative journalist and the founder of the nonprofit newsroom The Markup. Angwin, whose career has been defined by holding tech giants accountable for privacy violations and algorithmic bias, alleges that Superhuman and Grammarly traded on her hard-earned professional reputation for corporate profit. The lawsuit seeks to represent a broad class of writers and thinkers—including literary icons like Stephen King and science communicators like Neil deGrasse Tyson—whose names were integrated into the software’s interface as "virtual editors." While the complaint does not specify a total sum for individual damages, it asserts that the aggregate claims for the plaintiff class exceed $5 million, meeting the threshold for federal jurisdiction under the Class Action Fairness Act.
The Mechanics of the Expert Review Feature
The core of the dispute involves an AI-powered suite of tools introduced by Superhuman over the past year. Among these was the "Expert Review" agent, designed to provide users with feedback on their writing style by mimicking the voice and editorial standards of specific famous figures. According to the complaint, users were invited to have their drafts "critiqued" by AI versions of renowned authors, living and dead.
Technically, the feature leveraged large language models (LLMs) to analyze a user’s text and generate responses that mirrored the perceived style of the selected "expert." For example, if a user selected the "Julia Angwin" persona, the AI would generate suggestions intended to reflect her investigative rigor or prose style. The lawsuit contends that this use of name and identity was not merely a tribute but a calculated commercial strategy to increase the perceived value and utility of the Grammarly platform.
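The persona-conditioning described in the complaint can be illustrated with a minimal, purely hypothetical sketch. Superhuman has not published its implementation; the function name, prompt wording, and structure below are assumptions intended only to show how a named expert might be folded into an LLM prompt.

```python
# Hypothetical sketch only: Superhuman's actual Expert Review
# implementation is not public. This shows one plausible way a
# persona name could be injected into an LLM editing prompt.

def build_expert_review_prompt(expert_name: str, draft: str) -> str:
    """Assemble a prompt asking an LLM to critique a draft in a named expert's style."""
    system = (
        f"You are an editorial assistant emulating the writing style and "
        f"editorial standards of {expert_name}. Critique the draft below "
        f"and suggest revisions consistent with that style."
    )
    # The draft is appended after a separator so the model can
    # distinguish instructions from the text under review.
    return f"{system}\n\n---\n{draft}"

prompt = build_expert_review_prompt("Julia Angwin", "Our data shows users love AI.")
```

The legal significance lies less in this mechanism than in its packaging: the expert's name is the user-facing product feature, regardless of how simple the underlying prompt construction may be.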
While the software included a fine-print disclaimer stating that the experts cited had not endorsed the product or directly participated in its development, the plaintiffs argue that such disclaimers are insufficient under the law. The complaint alleges that the primary draw for the user was the unauthorized association with the expert’s brand, a move that constitutes a violation of the "Right of Publicity."
A Chronology of the Conflict
The friction between Superhuman and the creative community began to escalate in early May 2024, following a series of investigative reports and a growing social media outcry.
- Late 2023: Superhuman integrates advanced AI agents into the Grammarly interface, aiming to move beyond simple grammar checking toward high-level stylistic coaching.
- April 2024: The "Expert Review" feature gains traction, featuring a library of names including investigative journalists, novelists, and scientists.
- May 2, 2024: Journalists at outlets such as WIRED and The New York Times begin noticing their names appearing in the tool. Public backlash grows on platforms like X (formerly Twitter), where writers express shock at their likenesses being used to sell AI subscriptions.
- May 15, 2024: Amid mounting pressure and the threat of legal action, Superhuman announces it will disable the feature. Ailian Gan, Superhuman’s director of product management, issues a statement acknowledging that the company "missed the mark."
- May 22, 2024: Peter Romer-Friedman, representing Julia Angwin, officially files the class action lawsuit in the Southern District of New York.
The rapid timeline from the feature’s public exposure to the filing of a federal lawsuit underscores the increasing volatility of the relationship between AI developers and content creators.
Legal Foundations: The Right of Publicity and Commercial Misappropriation
The legal strategy employed by Angwin and her counsel, Peter Romer-Friedman, rests on established "Right of Publicity" statutes in New York and California. These laws are designed to prevent the unauthorized commercial use of an individual’s name, likeness, or persona.
In New York, Sections 50 and 51 of the Civil Rights Law provide a private right of action for any person whose name is used for "advertising purposes or for the purposes of trade" without written consent. California’s Civil Code Section 3344 offers similar protections, which are particularly relevant given Superhuman’s corporate headquarters in the state.
"Legally, we think it’s a pretty straightforward case," Romer-Friedman stated. He emphasized that the case transcends simple copyright issues. While many AI lawsuits focus on the data used to train models, this case focuses on the marketing and delivery of the AI’s output. By explicitly using the names of professionals to categorize and sell a service, the plaintiffs argue that Superhuman crossed the line from technological innovation into identity theft for commercial gain.
The complaint argues that for professionals like Angwin, who have spent decades building a brand based on trust, accuracy, and independence, having their identity co-opted by an automated tool is particularly damaging. It suggests that the AI might attribute "advice they never gave" to them, potentially harming their professional standing.
Corporate Defense and the "Meritless" Claim
Despite the public apology issued by product management, Superhuman’s executive leadership has taken a firmer stance regarding the litigation. Shishir Mehrotra, CEO of Superhuman, categorized the lawsuit’s claims as "without merit." In an official statement, Mehrotra indicated that the company intends to defend its position vigorously in court.
The company’s defense is expected to hinge on several factors:
- Transformative Use: Arguments that the AI-generated critiques are transformative works of art or technology rather than simple identity theft.
- Lack of Confusion: Assertions that the disclaimers provided enough clarity that no reasonable user would believe Julia Angwin or Stephen King were personally editing their emails.
- First Amendment Protections: Potential claims that mimicking a style is a form of protected expression, akin to parody or biographical reference.
However, the decision to pull the feature before the lawsuit was even filed suggests that the company recognized the significant reputational and legal risks involved. Ailian Gan’s statement reflected a desire to "reimagine" the feature with an opt-in model, acknowledging that experts deserve "real control over how they want to be represented."
Supporting Data and the Broader AI Landscape
The lawsuit against Superhuman is not an isolated incident but part of a tidal wave of litigation reshaping the AI industry. According to data from legal analytics firms, AI-related intellectual property filings increased by over 200% between 2022 and 2024.
- Authors Guild v. OpenAI: A similar high-profile case involving authors like George R.R. Martin and John Grisham, who allege their copyrighted works were used to train ChatGPT without compensation.
- The New York Times v. Microsoft and OpenAI: A landmark case focusing on the "regurgitation" of proprietary journalistic content.
- The Sarah Silverman cases: Class actions targeting Meta and OpenAI over the use of copyrighted books in AI training sets.
What distinguishes the Angwin/Superhuman case is the focus on "Identity" rather than "Content." While other suits argue about the input (the books and articles used to train the AI), this suit focuses on the labeling (the use of the author’s name as a product feature).
Market data suggests that the generative AI market is projected to reach $1.3 trillion by 2032. This massive economic potential has led many tech firms to adopt a "move fast and break things" approach to product development. However, the Angwin lawsuit suggests that the "breaking" of individual rights may no longer be tolerated by the creative class.
Professional Reactions and Industry Implications
The reaction from the journalism and publishing communities has been one of wary vindication. Organizations such as the National Writers Union and the NewsGuild-CWA have expressed ongoing concerns regarding "synthetic media" and the erosion of the professional writer’s value.
For investigative journalists like Angwin, the case is a matter of principle. Having spent her career documenting the "privacy paradox" and the ways in which Silicon Valley extracts value from individuals, her role as the lead plaintiff is a natural extension of her body of work. The complaint explicitly notes that she seeks to "stop Grammarly from trading on her name" and to protect the integrity of the writing profession.
Industry analysts suggest that this case could force a major pivot in how AI companies approach "persona-based" AI. If the court rules in favor of Angwin, it would set a precedent requiring AI companies to secure explicit, likely compensated, licensing agreements before using any real person’s name or stylistic likeness in a commercial product. This would mirror the licensing models used in the music and film industries for decades.
Future Outlook
As the case proceeds toward discovery in the Southern District of New York, the legal community will be watching closely. The outcome will likely define the boundaries of "AI style mimicry."
If Superhuman is found liable, it could face millions of dollars in statutory damages and a permanent injunction against using such features. More importantly, it would signal to the burgeoning AI sector that the names and reputations of creators are not public domain "data points" to be harvested, but protected assets that require consent and compensation.
For now, the "Expert Review" tool remains disabled, a silent testament to the ongoing struggle between rapid technological advancement and the fundamental rights of the individuals whose work and identities power that very technology. The resolution of this suit will serve as a critical milestone in determining whether the future of AI will be built on collaboration with creators or on the unauthorized appropriation of their life’s work.