Justin Diego, a prominent celebrity news influencer with a collective following of over 617,000 across YouTube and Instagram, is no stranger to the public eye. However, like many high-profile digital creators, Diego maintained a boundary between his professional persona and his private interests. In early 2024, he established a secondary, "burner" account on the social media platform X to follow and organize content from his favorite OnlyFans creators. This account was never used for posting or public engagement; instead, it served as a private digital archive where Diego could bookmark and "like" solo content and niche adult media. To Diego, the account was a harmless tool for personal curation, providing a layer of anonymity that his primary accounts could not offer.
The utility of this archive came to an abrupt end last weekend when Diego discovered his account had been suspended. He was not alone. Over the past several weeks, X has dramatically intensified its efforts to scrub the platform of automated accounts, spam bots, and inauthentic activity. While the company frames this as a necessary step toward improving user experience and platform integrity, the sweep has resulted in significant collateral damage. Thousands of human users, particularly those maintaining "alt" accounts for the consumption of adult content, have found their years of curated data permanently erased by algorithms that appear unable to distinguish between a silent human lurker and a malicious automated bot.
The Mechanics of the April 2024 Purge
The current wave of suspensions is the result of a deliberate and aggressive escalation in X’s bot-detection protocols. On April 9, Nikita Bier, X’s head of product, revealed the sheer scale of the operation, stating that the platform was flagging and suspending accounts at a rate of "208 bots per minute." This initiative follows a directive from X’s ownership to eliminate "reply spam" and "DM spam," which have plagued the platform since its transition from Twitter.
The technical basis for these suspensions lies in X’s policy against "inauthentic activity." According to the company’s official documentation, inauthentic activity includes behaviors that "undermine the integrity of X," such as mass-following, automated engagement, or the use of multiple accounts to manipulate trends. However, for users like Diego, the criteria for "inauthenticity" appear to have expanded to include accounts that exhibit low posting frequency but high engagement with specific, often adult-oriented, content.
Because many "alt" accounts are used exclusively for "lurking"—reading, liking, and bookmarking without ever publishing original posts—they often mirror the behavioral patterns of "sleeper" bots. These bots are typically programmed to remain quiet for long periods before being activated for coordinated spam campaigns. By prioritizing speed and volume in its bot-purging efforts, X's automated moderation systems have cast a net wide enough to snare legitimate users who simply prefer to remain silent.
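The scale problem compounds the classification problem. Even a highly accurate detector produces enormous collateral damage when run against hundreds of millions of accounts, because silent humans vastly outnumber bots. A minimal back-of-the-envelope sketch illustrates this; every number here is an assumption for illustration, not a figure X has published:

```python
# Illustrative base-rate arithmetic. All inputs are assumptions, not X's
# actual figures: the point is that even a small false-positive rate
# translates into a large absolute number of wrongly suspended humans.

def expected_flags(n_accounts, bot_rate, tpr, fpr):
    """Return (bots correctly flagged, humans wrongly flagged) for one pass.

    n_accounts -- total accounts screened
    bot_rate   -- assumed fraction of accounts that are actually bots
    tpr        -- true-positive rate (bots the classifier catches)
    fpr        -- false-positive rate (humans the classifier misfires on)
    """
    bots = n_accounts * bot_rate
    humans = n_accounts - bots
    return bots * tpr, humans * fpr

# Assume 250M screened accounts, 5% of them bots, a 95% catch rate,
# and a seemingly tiny 0.5% false-positive rate.
caught, collateral = expected_flags(250_000_000, 0.05, 0.95, 0.005)
print(f"bots caught: {caught:,.0f}")          # 11,875,000
print(f"humans swept up: {collateral:,.0f}")  # 1,187,500
```

Under these assumed rates, more than a million real people are suspended alongside the bots, which is consistent with the anecdotal wave of "alt" account losses described above.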
A Chronology of the War on Bots
The struggle against automated accounts has been a central theme of Elon Musk’s tenure at X. Shortly after his $44 billion acquisition of the platform in late 2022, Musk famously pledged to "defeat the spam bots or die trying." The issue was not merely one of user experience; it was a matter of valuation and trust for advertisers who feared their marketing budgets were being wasted on non-human audiences.
In October 2023, Bier’s team initiated a significant scrub that removed 1.7 million bots in a single wave, primarily targeting accounts responsible for cluttering the replies of high-engagement posts. By early 2024, the focus shifted toward "spam mitigation features," with Bier noting that nearly half of the product team had been redirected to focus on automated enforcement.
The timeline of these efforts suggests an increasing reliance on artificial intelligence and Large Language Models (LLMs) to handle moderation at scale. While these tools allow for the rapid processing of millions of accounts, they often lack the nuance required to understand human social behavior. This technological gap has led to a growing disconnect between the company’s stated goals and the reality experienced by its user base.
The Loss of Digital Libraries and Cultural Archives
For the users affected by the recent purge, the loss is more than just a convenience; it is the destruction of years of curated history. Tom Zohar, an actor based in San Diego, described the suspension of his long-standing account as a modern tragedy. "Not a single rule was violated mind you, years of curation and accumulation gone in a flash for no reason," Zohar posted. He compared the event to the burning of the Library of Alexandria, a sentiment echoed by many in the adult content and "gooner" subcultures who used X as a primary repository for queer education and sexual expression.
X has historically occupied a unique position in the social media landscape. Unlike Meta (Facebook and Instagram) or TikTok, which maintain strict prohibitions on sexually explicit content, X has traditionally allowed consensual adult media. This policy made the platform a sanctuary for creators and consumers who were deplatformed elsewhere. The "alt" account culture emerged as a survival mechanism, allowing users to engage with this content without risking their professional reputations or facing harassment on their primary profiles.
The sudden deletion of these accounts has dismantled networks of creators and fans that took years to build. For many in the LGBTQ+ community, these accounts were not just for porn; they were spaces for exploring identity and finding community in a digital environment that felt safer than the offline world.
The Irony of the Premium Subscription
One of the most contentious aspects of the recent purge is that even "verified" accounts have not been spared. Under Musk’s leadership, the X Premium (formerly Twitter Blue) subscription was marketed as a way to verify human identity. Subscribers pay a monthly fee, provide a credit card, and often undergo ID verification to receive a blue checkmark. The implicit promise was that paying users would be prioritized and protected from the "bot" label.
Justin Diego, who paid for a premium subscription, expressed shock that his account was still flagged. "I actually am shocked, because I had a premium subscription," Diego said. "Wasn’t the whole point of paying for X to verify that you are human?"
This discrepancy has fueled a wave of backlash on the platform and led to the creation of petitions on sites like Change.org. One petitioner argued that X’s AI systems are failing to distinguish between "a real human account that’s been paying their premium subscription for 2 years… versus a bot from Nigeria or Singapore." The failure to protect paying customers suggests a breakdown in the integration between X’s revenue models and its security protocols.
Sociological Implications: The Digital Closet
The impact of "blunt-force" moderation is rarely distributed evenly across a platform’s population. Alexander Monea, an associate professor at George Mason University and the author of The Digital Closet: How the Internet Became Straight, notes that queer and trans creators are often the first to suffer when platforms purge sexual content.
"The very communities that are most dependent on digital platforms for finding information, exploring their identities, and forming communities… are the same ones most susceptible to being swept up in blunt-force enforcement measures," Monea explained.
When X’s algorithms flag "inauthentic activity," they often rely on patterns of behavior that are common in marginalized communities—such as using pseudonyms, maintaining private accounts, and engaging with niche or "taboo" content. By categorizing these human behaviors as "spam," the platform effectively sanitizes its digital space at the expense of diversity and personal freedom. This phenomenon, often referred to as "algorithmic bias," reinforces traditional social norms by penalizing those who seek privacy or non-conforming content.
Technical Failures and the Limits of AI Moderation
The current crisis at X highlights the broader industry challenge of balancing safety with accuracy. As social media companies face increasing legal pressure to moderate content—such as the European Union’s Digital Services Act (DSA)—they are turning to AI to manage the workload. However, AI is notoriously poor at understanding context.
An AI might see an account that follows 500 people, has zero followers, and only "likes" videos as a bot designed to inflate engagement metrics. A human reviewer, however, would recognize it as a classic "lurker" account. That X's current systems cannot make this distinction suggests its "spam mitigation features" are tuned to tolerate false positives (suspended humans) rather than false negatives (surviving bots).
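The failure mode can be sketched as a coarse rule-based filter. The feature names and thresholds below are invented for illustration (X's real signals are not public); the point is that on surface-level features alone, a human lurker and a dormant spam bot are indistinguishable:

```python
# Hypothetical sketch of a blunt, rule-based bot filter. The features and
# thresholds are assumptions made for illustration, not X's actual logic.

def looks_like_bot(account: dict) -> bool:
    """Flag accounts that follow many, are followed by few, and never post."""
    return (
        account["following"] > 300
        and account["followers"] < 10
        and account["posts"] == 0
        and account["likes"] > 1000
    )

# A human "lurker" alt account and a dormant "sleeper" bot present
# identical coarse features: both trip the same rule.
lurker = {"following": 500, "followers": 0, "posts": 0, "likes": 4200}
sleeper_bot = {"following": 500, "followers": 2, "posts": 0, "likes": 8000}

print(looks_like_bot(lurker), looks_like_bot(sleeper_bot))  # True True
```

Distinguishing the two would require signals a threshold rule cannot see, such as browsing cadence, device history, or payment records, which is precisely the context-sensitive judgment the purge appears to lack.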
Furthermore, X’s own AI tools have come under fire for creating the very problems the company claims to be solving. The platform’s chatbot, Grok, was recently criticized for its image-generation features, which allowed users to create nonconsensual deepfake pornography. This creates a paradoxical situation where the platform struggles to stop the generation of harmful AI content while simultaneously deleting the accounts of real humans engaging in consensual activity.
Future Outlook: A Platform in Flux
As of late April, X has not responded to multiple requests for comment regarding the number of human accounts caught in the bot purge. While some users have successfully appealed their suspensions and had their accounts reinstated, many others, including Justin Diego, remain locked out.
The long-term implications for X’s user base are significant. If the platform continues to prioritize automated, high-volume purges without human oversight, it risks alienating its most dedicated users. The loss of "alt" accounts may seem trivial to some, but for the millions of people who use X as a private archive of their digital lives, the message is clear: privacy and curation are no longer guaranteed.
The ongoing "war on bots" has reached a stalemate. While the platform may be cleaner of reply spam, it is also becoming emptier of the human nuance that once made it a "global town square." For users like Diego, the fight to reclaim their digital history continues, serving as a cautionary tale about the dangers of surrendering moderation entirely to the machines.