The Ethics of AI Persona Simulation: Grammarly’s Rebrand to Superhuman and the Controversy Over Unauthorized Expert Reviews

The landscape of digital writing assistance has undergone a radical transformation as Grammarly, the long-standing leader in automated proofreading, transitions into a new era under the corporate banner of Superhuman. The rebranding, announced in October by CEO Shishir Mehrotra, signals a shift from a tool focused primarily on syntax and spelling to an expansive generative AI "partner." However, a new feature titled "Expert Review" has ignited a firestorm of ethical and legal debate: the tool offers feedback generated in the simulated voices of high-profile academics and authors, both living and deceased, without obtaining their consent or establishing any formal affiliation.

The rebranding to Superhuman is intended to reflect a "new suite of AI-powered products" that aim to automate the creative and professional writing processes. According to Mehrotra, the goal is to make the technology feel "ordinary" by embedding "extraordinary" capabilities beneath the surface. While the core writing interface remains branded as Grammarly, the underlying engine now powers a range of features including an AI chatbot for real-time drafting, a "paraphraser" for stylistic adjustments, a "humanizer" to mask AI-generated patterns, and an AI grader that predicts academic performance. The most controversial among these is the "Expert Review" agent, which leverages Large Language Models (LLMs) to mimic the critical voices of specific intellectual figures.

The "Expert Review" Mechanism and Unauthorized Personas

The "Expert Review" feature functions by allowing users to solicit critiques from virtual versions of renowned thinkers. The roster of available "experts" includes living figures such as horror novelist Stephen King, astrophysicist Neil deGrasse Tyson, and cognitive scientists Steven Pinker and Gary Marcus. Perhaps more contentiously, the system also includes deceased luminaries, such as William Zinsser, author of the seminal "On Writing Well," astronomer Carl Sagan, sociologist Pierre Bourdieu, and historian David Abulafia.

Grammarly provides a disclaimer stating that references to these experts are for "informational purposes only" and do not indicate an endorsement or affiliation. Jen Dakin, senior communications manager at Superhuman, defended the feature by clarifying that the agent "doesn’t claim endorsement or direct participation from those experts; it provides suggestions inspired by works of experts and points users toward influential voices." Despite this, the use of specific names to market an AI-generated service has raised questions regarding the "Right of Publicity" and the ethical boundaries of "reanimating" deceased individuals for commercial gain.

A Chronology of Grammarly’s Evolution

To understand the current controversy, it is necessary to examine the trajectory of Grammarly’s development from a niche academic tool to a global tech giant.

  • 2009: Grammarly is founded in Kyiv, Ukraine, by Max Lytvyn, Alex Shevchenko, and Dmytro Lider. Its initial focus is helping students improve their grammar and prevent plagiarism.
  • 2011-2017: The company expands into a browser extension and a desktop application, transitioning into the enterprise market. It secures its first round of institutional funding ($110 million) in 2017.
  • 2019-2021: Grammarly reaches "decacorn" status, valued at over $13 billion. It begins integrating more sophisticated "tone detection" features.
  • 2022-2023: The emergence of ChatGPT and other LLMs forces a strategic pivot. Grammarly launches "GrammarlyGO," its first major foray into generative AI.
  • October 2025: The company announces its rebrand to Superhuman, signaling a focus on "human-AI collaboration" and the rollout of the "Expert Review" tool.
  • Late 2025: Academic and literary communities begin to voice public opposition to the unauthorized use of scholar personas within the platform.

Supporting Data: The Rise of AI in Academia and Creative Fields

The introduction of these features occurs against a backdrop of increasing AI adoption and subsequent friction in educational and creative sectors. Data from Turnitin, a leading plagiarism detection service, indicates that between April 2023 and April 2024, over 200 million papers were reviewed for AI presence, with approximately 11% containing at least 20% AI-generated text. This suggests a massive demand for tools that can refine or "humanize" AI content—a demand Superhuman is now actively courting.
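As a back-of-envelope check, the Turnitin figures cited above imply roughly 22 million flagged papers. A minimal sketch of that arithmetic (the variable names are ours, and the inputs are the rounded numbers reported by Turnitin):

```python
# Rough implied count from Turnitin's reported figures:
# ~200 million papers screened, ~11% containing at least 20% AI-generated text.
papers_reviewed = 200_000_000
share_flagged = 0.11  # fraction with >= 20% AI-generated text

flagged = int(papers_reviewed * share_flagged)
print(f"{flagged:,} papers implied flagged")  # 22,000,000 papers implied flagged
```

Because both inputs are rounded ("over" 200 million, "approximately" 11%), the result is an order-of-magnitude estimate rather than a precise figure.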

Furthermore, the legal landscape surrounding the "scraping" of authorial work is currently being tested in the courts. Several high-profile lawsuits, including Authors Guild v. OpenAI and The New York Times Co. v. Microsoft and OpenAI, are examining whether training LLMs on copyrighted material constitutes "fair use." Superhuman’s "Expert Review" adds a new layer to this conflict: it does not merely train on the authors’ work, it explicitly markets their names as a product feature.

Reactions from the Academic and Intellectual Community

The reaction from the academic community has been largely critical. Vanessa Heggie, an associate professor of the history of science and medicine at the University of Birmingham, recently publicized her concerns on LinkedIn. She highlighted the case of David Abulafia, a distinguished historian who passed away in January 2024. Heggie described the creation of an AI model based on his "scraped work" so soon after his death as "obscene," arguing that the company is trading on the reputations of scholars without their consent.

C.E. Aubin, a historian and postdoctoral fellow at Yale University, echoed these sentiments, stating that the "expert" system validates a "profound mistrust" of AI within the humanities. Aubin argues that reducing scholarship to a set of algorithmic concepts "eliminates personhood" and insults the labor of actual thinkers who are currently facing institutional and economic pressures. "These are not expert reviews," Aubin noted, "because there are no experts involved in producing them."

The literary world has been similarly protective of its intellectual property. While Stephen King has previously expressed a pragmatic, albeit cautious, view of technological progress, many other authors have joined class-action lawsuits to prevent their work from being used to train the very machines that might eventually replace them.

Technical Reliability and the "Simpsons" Test

Beyond the ethical debate lies a practical question: how effective are these simulated experts? Independent testing of the "Expert Review" tool has shown mixed results. While the AI can offer generic stylistic advice—such as Virginia Tufte’s simulated agent suggesting "vivid, varied sentence patterns"—it often fails to grasp nuance or detect cultural references.

In one notable instance, a user tested Grammarly’s plagiarism and AI-detection tools using a direct quote from a well-known episode of The Simpsons. The text included a nonsensical summation: "In conclusion, Libya is a land of contrasts." While the tool failed to identify the source as a popular television script, it did flag the phrase "a land of contrasts" as a sequence commonly produced by LLMs. This highlights a growing irony: users write with AI, then deploy "humanizer" tools to hide the AI’s fingerprints, while the detection tools themselves grow increasingly confused by the resulting "simulated human" output.

Broader Implications and Future Outlook

The transition of Grammarly into Superhuman represents a broader trend in the tech industry where "expert" labor is being commodified into software. This shift has several long-term implications for society and professional industries:

  1. The Erosion of Intellectual Property: If an AI can successfully mimic the voice of a specific author or historian, the commercial value of that individual’s unique style may be diluted. This raises questions about how estates will manage the "digital remains" of deceased public figures.
  2. Academic Integrity: As AI tools become more sophisticated in "humanizing" text and providing "expert" feedback, the distinction between a student’s original thought and machine-assisted output becomes nearly impossible to draw. This may force a fundamental restructuring of how academic performance is assessed.
  3. The Devaluation of Expertise: By suggesting that a Large Language Model can provide the same value as a lifetime of scholarly study, tech companies risk framing experts as mere data sets rather than active participants in a discourse.
  4. Legal Precedents: The unauthorized use of names like King and Sagan in a commercial product could trigger new legislation regarding "digital twins" and personality rights, similar to the discussions currently surrounding AI-generated music and "deepfake" performances in the film industry.

As Superhuman continues to roll out its AI suite, the tension between technological efficiency and ethical responsibility is likely to intensify. While the company maintains that it is merely providing "inspiration," the scholars and writers whose names are being used see a more cynical exploitation of their life’s work. The outcome of this debate will likely define the boundaries of human-AI collaboration for the next decade, determining whether technology serves as a support for human expertise or a replacement for it.
