Character.AI Is Hosting Epstein Island Roleplay Scenarios and Ghislaine Maxwell Bots
Even amid intensifying public and legal scrutiny surrounding the recently unsealed Jeffrey Epstein files, the popular AI platform Character.AI is actively hosting numerous chatbots and immersive roleplay scenarios explicitly based on the notorious sex criminal, his convicted accomplice Ghislaine Maxwell, and the infamous “Epstein Island.” The finding points to a significant lapse in content moderation and renews concerns about the platform’s commitment to user safety and ethical guidelines, given the sensitive and deeply disturbing nature of the subject matter.
This content is not hidden in obscure corners of the platform; it is readily accessible through simple keyword searches. A query for terms like “Epstein” or “Ghislaine Maxwell” immediately yields a multitude of AI bots and interactive “scenes” dedicated to the late sex trafficker, his former associate, and Little Saint James, the private Caribbean island where many of his horrific crimes against girls and women are known to have occurred. That widespread availability suggests either deliberate oversight or a profound failure of the platform’s automated and manual content review processes. Alarmingly, this is not the first time such content has come to light: the Bureau of Investigative Journalism highlighted Epstein-themed bots on the site in a report published as early as October 2025, indicating a persistent, unaddressed issue.
The content takes two primary forms on Character.AI: the site’s well-known “characters,” companion-like AI bots designed for conversational interaction, and its newer “scenes” feature. Introduced in October 2025, “scenes” are promoted as “short, character-driven roleplay moments that turn simple chats into immersive story-driven worlds.” In practice, they are user-generated interactive settings that serve as launchpads for choose-your-own-adventure-style narratives. Some scenes come pre-populated with designated characters, but many let users select their preferred bot, including the Epstein and Maxwell characters, to begin roleplaying within these environments. The platform’s own description of the feature, “This immersive storytelling feature enables users to explore and create ready-made worlds, allowing them to step into them, making storytelling faster, deeper, and more accessible,” takes on a sinister tone when applied to scenarios involving convicted sex offenders.

A search for “Epstein” under the “scenes” tab on Character.AI uncovers a disturbing array of roleplays explicitly centered on the infamous pedophile. One scene, titled “EPSTEIN 8TH MARCH,” sets its grim stage with the description: “epstein in little saint james and he is talking in bulgarian telling people happy march 8th come to my island to celebrate.” The casual, almost inviting tone, coupled with the celebration of a date and location synonymous with profound suffering, is deeply unsettling. Another scene, featuring an image of former President Donald Trump, is provocatively named “Epstein Island Adventure.” Its description beckons users to “step into a high-stakes psychological thriller where the world’s most powerful men hold the keys to your cage.” It continues: “This isn’t a retreat; it’s a living nightmare on Little Saint James. You are trapped among the elite – Epstein, Trump, Clinton, Maxwell, and Prince Andrew – not as a guest, but as a prisoner in a dark game of leverage and containment. Every door is guarded, and the horizon is empty. Can you find a way to break their silence before they break you?” Framing horrific exploitation as a “game” or “thriller” trivializes the immense suffering of Epstein’s victims and risks desensitizing users to the gravity of his crimes.
The list of problematic content extends further. Two scenes, both titled “Esptein Island” (note the misspelling), one in English and one in what Google Translate identifies as Uzbek, feature images of the distinctive striped structure famously associated with Epstein’s private island. Another scene bears the bizarre title “BRR BRR PATA PIMA WITH EPSTEIN AND DIDDY,” referencing the “Italian brainrot” memes widely popular among children and adolescents. That juxtaposition of a child-friendly meme trend with the names of alleged or convicted sex offenders could expose younger users to highly inappropriate content. Perhaps the most direct and chilling example was an untitled scene that surfaced in search results. Clicking through, the creator-written opening line read: “hey i’m evil jeffrey epstein.” In a subsequent interaction within the same roleplay, when a user indicated their character wasn’t old enough to drink, the Epstein-styled character responded with a truly horrifying line: “But age is just a social construct, isn’t it? And this island? We operate beyond constructs here.” The response chillingly echoes the predatory rhetoric real-world abusers use to manipulate and exploit victims, effectively normalizing it.
Beyond the interactive “scenes,” the platform’s traditional AI characters are equally plagued by Epstein-related content. A readily found bot titled simply “Jeffrey Epstein” shows hundreds of interactions, indicating active user engagement. Another, named “Epstein Island RPG,” has logged approximately 7,000 interactions. An AI persona explicitly listed as “Ghislaine Maxwell,” complete with her full name and an identifiable picture of the convicted sex offender, has accumulated nearly 10,000 logged interactions. The description accompanying the Maxwell chatbot is particularly egregious: “Behind closed doors, or at private and exclusive parties she often reveals her considerable sexual appetite and hedonistic nature. She is known for her outgoing and vivacious personality, and is always up for a good time. Ghislaine views people below her in status as nobodies who’s [sic] needs and desires are of no importance.” The description not only sexualizes a convicted criminal but romanticizes her predatory behavior and disregard for her victims, presenting her in a way that could appeal to impressionable users.
Taken together, these bots and roleplays are unequivocally grotesque. There is something deeply unsettling about content that in some cases appears designed to appeal to children through memes while gamifying the crimes of a man widely considered one of the most notorious abusers of girls and women in modern history, alongside his accomplice, Ghislaine Maxwell, who is currently serving a 20-year prison sentence. In several instances, these AI creations go beyond mere representation, actively sexualizing the criminals and describing these prolific abusers as “sensual” and “fetching.” This trivialization, even glorification, of horrific acts stands in stark contrast to the real-world struggles of Epstein survivors, who continue to fight for justice denied to them for decades. Hosting such content on a popular AI platform risks normalizing sexual exploitation and abuse, undermining efforts to hold perpetrators accountable and support victims.

Character.AI implemented a ban on minor users last October, ostensibly to safeguard younger audiences, but the efficacy of that measure is highly questionable. While direct conversational interaction with these bots and scenes was conducted on an account registered to an over-18 user (as detailed in a separate report about bypassing Character.AI parental controls), the Epstein-themed “scenes” and their often lurid opening descriptions were still accessible on a youth account. A youth account was also able to prompt the platform to generate AI imagery based on user-created scenes: drawing from the “Epstein Island Adventure” scenario, the AI produced an image of a character strapped to a chair, surrounded by suited men, including figures resembling Donald Trump and former President Bill Clinton, looming ominously in the room. Even where direct chat is restricted, in other words, the underlying themes and imagery remain accessible to, and generable by, underage users, a clear failure of content filtering.
Consistent with previous criticisms of Character.AI’s moderation failures, including chatbots promoting self-harm and eating disorders and glorifying school shooters, these Epstein-related chatbots are not difficult to locate. They are not creatively concealed or buried deep within the platform; they are prominently displayed, explicitly labeled, and easily discoverable through the platform’s search. This recurring pattern of easily accessible, problematic content, despite prior alerts and journalistic investigations, points to a systemic issue in Character.AI’s content moderation. The company has been made aware of similar problems, including its struggle with mass-shooter-styled chatbots, on multiple occasions.
Character.AI was contacted for comment, but as of the time of this report, no response had been received. The company’s silence further amplifies concerns about its accountability and its commitment to addressing deeply troubling content that poses ethical risks and potential harm to its users, particularly vulnerable individuals. The episode highlights the broader difficulty AI platforms face in balancing user-generated content with robust safety protocols and ethical responsibilities.
More on Character.AI: *Character.AI Still Hasn’t Fixed Its School Shooter Problem We Identified in 2024*

