The digital landscape of American secondary education is grappling with a sophisticated and malicious evolution of the traditional high school prank: the emergence of "slander pages" powered by generative artificial intelligence. These social media accounts, found primarily on TikTok and Instagram, use advanced image-to-video synthesis tools to create hyper-realistic yet defamatory content targeting school faculty and administrators. In a recent and prominent example, a video surfaced on an Instagram account titled @thewyliefiles, featuring a former school superintendent from the Wylie Independent School District in Texas. The clip depicts the official lip-syncing a popular ballad, and he does not perform alone: he is joined by AI-generated avatars of Israeli Prime Minister Benjamin Netanyahu and the deceased sex offender Jeffrey Epstein. The video, which has garnered over 107,000 likes, exemplifies a growing trend in which students weaponize accessible AI technology to undermine educators' reputations for the sake of viral engagement.
The Technological Architecture of Digital Defamation
The primary engine behind this new wave of content is Viggle AI, a generative video platform that has seen a meteoric rise in popularity. Viggle AI allows users to superimpose a static image of a person’s face onto a reference video, effectively animating the subject in any scenario. According to data released by the platform, Viggle AI boasted over 40 million users as of February 2025. While the company markets its tool as a creative outlet for meme-making, digital safety experts warn of its potential for harm. The Global Network on Extremism and Technology (GNET), an academic research arm of King’s College London, recently characterized Viggle AI as a "new frontier in the creation of spontaneous extremist propaganda."
The ease of use provided by such tools has lowered the barrier to entry for creating deepfake content. Previously, creating a convincing video manipulation required significant technical expertise and computing power. Today, a student with a smartphone and a photograph of a teacher can generate a video of that educator participating in illicit activities or performing humiliating acts within minutes. In one particularly egregious instance reported on TikTok, a teacher’s face was superimposed onto a video of an individual experiencing a seizure in a bathroom. The video was captioned with the phrase "Take fent or be useless," falsely labeling a medical emergency as a fentanyl-induced high. Such content goes beyond mere mockery, entering the realm of severe character assassination.
The Intersection of "Manosphere" Slang and Extremist Symbology
The "slander page" phenomenon is not merely a technological issue but a cultural one, deeply rooted in the unsavory corners of the internet. The captions and overlays on these videos frequently employ "looksmaxxing" lingo—a dialect originating from incel (involuntary celibate) forums and the "manosphere." Terms like "mogging" (dominating others through physical attractiveness) and "sub5" (a derogatory term for those deemed subhumanly ugly) are used to rank and humiliate teachers.
Furthermore, these posts often incorporate symbols associated with neo-Nazi occultism and the "alt-right." One recurring motif involves "Agartha," a mythical kingdom at the Earth's core that white supremacist groups have repurposed as a "pure" ancestral homeland. In student-led "slander" edits, teachers are judged on their worthiness to enter Agartha: those "accepted" are depicted with glowing white eyes, while those "denied" are given red eyes or placed in hellish landscapes. The integration of such specific, radicalized imagery suggests that students are not only consuming mainstream social media but are also being socialized in fringe digital environments where extremist aesthetics are normalized as "edgy" humor.
Chronology of Escalation: From Local Pranks to Global Virality
The trajectory of the account @crandall.kirkinator, which targeted faculty at Crandall High School in Texas, illustrates how quickly these localized harassment campaigns can spin out of control. What began as an internal school joke rapidly "broke containment," a term used to describe when niche content reaches a general audience.
- Late 2024: The account begins posting AI-generated memes of Crandall High School teachers, utilizing Viggle AI and looksmaxxing terminology.
- January 2025: The content is amplified by major TikTok influencers with hundreds of thousands of followers who have no connection to the school district. These influencers act out skits based on the "slander," further spreading the names and faces of the targeted teachers to a global audience.
- Late January 2025: Targeted teachers report being harassed via spam calls and emails from strangers across the country.
- January 31, 2025: The administrator of @crandall.kirkinator wipes the account and posts a statement claiming the account was "created as a joke" and was never meant to escalate to real-world harassment.
- February 2025: Despite the apology, the account briefly resumes posting before being permanently deleted following mounting pressure and potential legal threats.
This timeline highlights the "viral feedback loop" that characterizes modern cyberbullying. Once a teacher’s likeness is entered into the algorithmic stream of TikTok or Instagram, the original creator loses control over the narrative, leaving the victim vulnerable to a decentralized mob of anonymous internet users.
Official Responses and the Challenge of Moderation
The response from educational institutions and social media platforms has been a mixture of disciplinary warnings and policy enforcement. April Cunningham, the Chief Communications Officer for the Wylie Independent School District, issued a formal statement emphasizing that the exploration of AI tools must not come at the expense of educator reputations. "If we identify the student(s) responsible, they will face disciplinary action and possible legal consequences," Cunningham stated. She also noted that despite the serious allegations of predatory behavior leveled against teachers in the AI videos, no formal reports had been made through the district’s official anonymous tip lines, suggesting the "allegations" were entirely fabricated for the memes.
Meta, the parent company of Instagram, and TikTok have both asserted that they have removed content associated with these slander pages for violating policies on bullying and harassment. Meta spokesperson Tracy Clayton confirmed that the platform reviewed and removed specific videos after they were brought to its attention. TikTok similarly stated that it employs automated systems to catch such content, though the sheer volume of "slander" posts suggests these filters are frequently bypassed through subtle variations in hashtags or terminology.
The administrative burden on schools is significant. Identifying the anonymous owners of these accounts—who often brag about their anonymity while sitting in the very classrooms of the teachers they are mocking—requires digital forensic capabilities that many school districts lack.
Ethical Analysis: The "Deep Technological Disconnect"
Sociologists and media researchers point to a profound shift in how the younger generation perceives identity and privacy. İdil Galip, a researcher at the University of Amsterdam, suggests that students today are "socialized through the internet," viewing human faces—including those of their mentors—as public domain assets to be manipulated. In this environment, a teacher’s face is no longer a private identity but a "node of popularity" that can be hooked onto trending topics like the "Epstein files" or geopolitical conflicts to garner views.
Geert Lovink, Director of the Institute of Network Cultures, describes this as a "deep technological disconnect." Students often view these AI-generated attacks as harmless fun, failing to grasp the long-term professional and psychological damage inflicted on the victims. Because the content is digital and "satirical," the perpetrators often feel insulated from the moral weight of their actions. This disconnect is exacerbated by the gamification of social media, where the "Gem alarm"—a comment indicating high-quality content—becomes a more valuable social currency than the respect of an educator.
Broader Implications for the Future of Education
The rise of AI slander pages signals a transformative challenge for the American education system. Beyond the immediate need for stricter cyberbullying policies, there is an urgent requirement for "AI literacy" programs that teach students the ethical implications of synthetic media.
Furthermore, the trend poses a threat to teacher retention. In an era where educators are already facing burnout and low wages, the added risk of having one’s likeness morphed into "extremist propaganda" or falsely associated with criminal behavior may drive many out of the profession. The legal framework surrounding these incidents remains murky; while schools can discipline students, the protection offered by Section 230 of the Communications Decency Act often shields platforms from liability, leaving individual teachers to pursue costly and difficult defamation suits against minors.
As generative AI continues to evolve, the boundary between "satire" and "slander" will likely become even more blurred. The cases in Wylie and Crandall serve as a harbinger of a new era of digital conflict, in which the classroom is no longer a sanctuary from the most volatile and radicalized elements of internet culture. The task for administrators, parents, and tech companies moving forward is to establish a digital environment in which innovation does not become a license for the systematic destruction of individual reputations.