Grammarly Parent Company Superhuman Facing Class Action Lawsuit Over Unconsented Use of Author Names in AI Tool

A federal class action lawsuit has been filed in the Southern District of New York against Superhuman, the technology company behind the prominent writing assistant software Grammarly, over an artificial intelligence feature that allegedly misappropriated the identities of hundreds of writers. The litigation centers on an "Expert Review" tool that provided users with editing suggestions and critiques presented as the voices of renowned journalists, authors, and academics. According to the complaint, these individuals never consented to their names, reputations, or intellectual styles being integrated into the commercial product.

The lawsuit was initiated by Julia Angwin, a highly respected investigative journalist and the founder of the nonprofit news organization The Markup. Angwin, who serves as the lead plaintiff, argues that Superhuman and Grammarly leveraged the prestige of established professionals to monetize AI-generated content without authorization or compensation. While the suit does not specify a total damage amount, it asserts that the aggregate claims for the proposed class—which includes hundreds of other writers and editors—exceed $5 million.

The Mechanics of the Expert Review Feature

The controversy stems from a suite of AI-powered widgets introduced by Superhuman last year. Among these was the "Expert Review" agent, which claimed to offer users insights from "thought leaders." The tool allowed users to submit their writing and receive feedback simulated to match the style and expertise of specific public figures. The names offered by the platform included literary giant Stephen King, astrophysicist Neil deGrasse Tyson, and Angwin herself.

Technically, the tool functioned by utilizing an underlying large language model (LLM) trained to mimic the stylistic nuances and known viewpoints of these figures. When a user engaged the tool, the AI would generate a critique under the banner of the selected "expert." While the software included a disclaimer stating that the individuals cited had not personally endorsed or participated in the development of the tool, the lawsuit contends that this does not absolve the company of legal liability for using their names for commercial gain.
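To make the mechanics concrete, here is a minimal, purely illustrative sketch of how a persona-framed "expert review" request might be assembled before being handed to an LLM. Superhuman has not disclosed its actual implementation; the function name, prompt wording, and the placeholder expert "Jane Doe" are all hypothetical.

```python
# Illustrative sketch only. How a persona-style critique prompt could be
# composed for an LLM; this is NOT Superhuman's disclosed implementation,
# and every name and field below is a placeholder.

def build_expert_review_prompt(expert_name: str, known_style: str, user_text: str) -> str:
    """Compose a critique request framed in a named expert's voice."""
    return (
        f"You are emulating the editorial voice of {expert_name}, "
        f"known for {known_style}. Critique the passage below in that "
        "voice, noting strengths, weaknesses, and suggested revisions.\n\n"
        f"PASSAGE:\n{user_text}"
    )

prompt = build_expert_review_prompt(
    expert_name="Jane Doe",  # placeholder, not a real persona
    known_style="data-driven investigative reporting",
    user_text="Our quarterly numbers were good.",
)
print(prompt)
```

The legal dispute turns on exactly this framing step: the model's output is generic, but attaching a real person's name to it is what the complaint characterizes as commercial misappropriation.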

The complaint alleges that the AI agents did more than just mimic style; they essentially "regurgitated" the life’s work and professional personas of the plaintiffs. Writers who tested the tool, including several journalists from WIRED, expressed alarm at seeing their professional identities transformed into automated features of a subscription-based service.

Chronology of the Dispute

The legal conflict follows a period of rapid AI deployment and subsequent professional pushback. The timeline of events leading to the federal filing is as follows:

  • Late 2023: Superhuman expands its AI capabilities, integrating a variety of generative tools into the Grammarly platform, including the "Expert Review" feature. The tool is marketed as a way for users to "tap into the insights" of famous writers.
  • Early 2024: Journalists and authors begin discovering their names listed as available "AI agents" within the software. Public discourse on social media and in professional circles grows increasingly critical of the lack of consent.
  • May 2024: Investigative reports, most notably by WIRED, highlight the frustrations of writers whose likenesses were being used. The reports reveal that many "experts" listed were unaware of their inclusion in the product.
  • Mid-May 2024: In response to the mounting backlash, Superhuman announces it will disable the feature. Ailian Gan, Superhuman’s director of product management, issues a public apology, stating the company "missed the mark."
  • Wednesday afternoon: The formal class action lawsuit is filed in the Southern District of New York, seeking to hold the company accountable for the period during which the tool was active and to prevent future unauthorized use of professional identities.

Legal Basis and the Right of Publicity

The lawsuit rests on long-standing legal principles regarding the "Right of Publicity." Under the laws of both New York and California—the two jurisdictions most relevant to this case—it is illegal to use a person’s name, voice, signature, photograph, or likeness for commercial purposes without prior written consent.

Peter Romer-Friedman, the attorney representing Angwin, emphasizes that the case is legally "straightforward." The core of the argument is that Superhuman used the reputations of these professionals as a marketing hook to drive subscriptions and engagement for its software. Romer-Friedman notes that the misappropriation of a professional’s name is particularly egregious when that name is the primary currency of their career.

For a journalist like Julia Angwin, her name is inextricably linked to her credibility and the quality of her investigative work. The lawsuit argues that by attributing AI-generated advice to her, Grammarly not only exploited her fame but potentially damaged the reputations of class members by "attributing words to them that they never uttered and advice that they never gave."

Corporate Defense and Response

Superhuman has signaled its intent to fight the lawsuit vigorously. In an official statement, CEO Shishir Mehrotra characterized the claims as being "without merit." Despite the legal stance, the company’s actions suggest an internal acknowledgment that the product’s rollout was flawed.

Ailian Gan’s statement prior to the filing indicated that the company is "reimagining" the feature. The goal of the redesign, according to the company, is to provide experts with "real control" over how they are represented. This shift reflects a broader trend in the tech industry where companies are moving away from the "move fast and break things" approach to AI training and toward a model that involves explicit licensing or opt-in agreements for creators.

Data and Industry Context: The AI Ethics Crisis

The lawsuit against Superhuman arrives at a pivotal moment for the generative AI industry. The legal landscape is currently being reshaped by a wave of litigation brought by creators who feel their work has been exploited to train and power AI models.

Supporting data highlights the scale of this tension:

  1. Litigation Trends: In the past 18 months, at least a dozen high-profile lawsuits have been filed against AI developers, including the New York Times’ suit against OpenAI and Microsoft, and the Authors Guild’s class action involving George R.R. Martin and John Grisham.
  2. Market Growth: The market for AI writing assistants is projected to reach several billion dollars by 2030. Companies are under immense pressure to differentiate their products, often leading to the integration of "celebrity" or "expert" personas to enhance user experience.
  3. Public Sentiment: A 2023 survey of professional writers found that over 85% were concerned about AI’s impact on their future earnings, with "unauthorized use of likeness" ranking as a top-tier concern alongside copyright infringement.

This case is unique because it focuses specifically on the "Right of Publicity" rather than copyright. While many AI lawsuits focus on the data used to train the models, the Superhuman case focuses on the output and branding—the act of putting a specific human name on an AI’s work.

Broader Implications for the Tech Sector

The outcome of Angwin v. Superhuman could have profound implications for how AI agents are branded in the future. If the court finds in favor of the plaintiffs, it would establish a clear boundary: AI companies cannot use the names of real people to "personify" their software without a licensing agreement.

This would likely force a change in the business models of many AI startups. Instead of offering "virtual versions" of famous figures, companies may have to rely on generic personas or enter into costly partnerships with estate holders and living professionals. For the journalism industry, the case represents a stand against the "devaluation of the expert." As Angwin’s lawsuit notes, the appropriation of a trade or skill honed over decades by a machine that uses the original creator’s name is viewed as a fundamental threat to the viability of creative professions.

Furthermore, the case highlights the limitations of disclaimers. The fact that Grammarly included a note saying the experts did not endorse the tool may not be enough to satisfy "Right of Publicity" statutes if the commercial benefit derived from using those names is proven.

Conclusion and Future Outlook

As the Southern District of New York begins its review of the complaint, the tech industry will be watching closely. The case serves as a warning that even well-intentioned features—designed, as Superhuman claimed, to "share knowledge"—can run afoul of the law if they bypass the consent of the individuals whose identities make that knowledge valuable.

For now, the "Expert Review" feature remains disabled. Superhuman’s promise to "do things differently going forward" suggests a future where AI interactions are more transparent and legally grounded. However, for Julia Angwin and the hundreds of other writers included in the class action, the focus remains on obtaining restitution for what they describe as a blatant commercial exploitation of their names and life’s work. The resolution of this case will likely define the parameters of "digital identity" in the age of generative artificial intelligence.
