If you haven’t scrolled through the Reels sections of Instagram and Facebook in a while, or if your algorithm is sufficiently shielded from the avalanche of troubling AI-generated content infesting its feed, you may have missed the disturbing rise of AI faith healers performing ostensible miracles on impossibly grotesque ailments. The phenomenon, part of the broader trend often termed "AI slop," floods Meta’s platforms with highly visual, emotionally charged, and deeply unsettling fake videos that exploit human vulnerability and religious belief for engagement. The clips depict people afflicted with surreal, medically impossible conditions, from elephantiasis-like limb swellings to bulbous, pus-oozing tumors and even bizarre conjoined-twin formations, who are then instantly "cured" by digital spiritual leaders. The imagery, typically showing AI-generated people with extreme, unnatural swelling and growths, is engineered to shock and grab attention, testing the limits of what counts as acceptable, let alone real, online content.
To grasp the full scope of this bizarre trend, one need only take a quick scroll through the dregs of Facebook’s AI influencers, though be warned: the content is extremely graphic and can be deeply disturbing. Accounts like "Mystery Hub," for example, host hundreds of short clips in which various spiritual leaders make quick work of conditions that defy medical understanding: inexplicably elongated legs that stretch beyond natural limits, festering tumors that morph and disappear, and what appear to be conjoined heads or other conjoined-twin formations that are miraculously separated or normalized. While many individual clips garner only a few hundred views, their cumulative effect is significant; some reach hundreds of thousands of views, and a select few break into the tens of millions, indicating a massive, if often unwitting, audience.
One particularly illustrative clip, boasting at least 120,000 views, presents a woman suffering from an impossibly diseased foot, grotesquely swollen and discolored, coupled with a giant, pulsating pus sack hanging precariously from her neck. The scene is visually repulsive, crafted by AI to maximize shock value. A priest, impeccably dressed in a flashy green suit – also AI-generated, with tell-tale imperfections upon closer inspection – then intervenes. As the AI pastor begins his prayer, his voice synthesized and calm, the miraculous transformation unfolds. "Heavenly father we bring this moment before you trusting in your mercy and care," the AI voice intones, as the woman’s heaving, distorted foot visibly deflates and her face, previously contorted in suffering, transforms into one of serene beauty. "Bring comfort, strength, and peace," the priest continues, "May hope be renewed and faith sustained, according to your loving will, amen. Be healed in the name of Jesus!" The effect is instantaneous and visually jarring, a digital sleight of hand designed to mimic divine intervention.
In response to this fabricated miracle, the now-beautified woman sheds her crutches as if they were never needed and declares with almost robotic fervor, "I feel it, the power is here!" A golden, ethereal light radiates around her, a common visual trope in these clips signifying divine presence, and the AI congregation, a faceless mass of digitally rendered figures, erupts in synchronized passion, its cheers and applause adding to the theatricality of the fake healing. This is just one of countless "AI faith healer" clips actively choking Meta’s platforms. The videos are not isolated incidents but a systemic problem, replicated across numerous accounts that all employ the same visual language and narrative structure to produce a constant stream of what can only be described as digital pseudoscience.
The absurdity doesn’t end there. On pages like "ForvaStar comics," a Facebook entity that has amassed a staggering 1.5 million followers, the content veers into even stranger territory. Here, the "healers" are not always traditional religious figures. We see commandos, inexplicably, healing policewomen afflicted with beet-red, impossibly swollen toes. Shamans are depicted expelling not just internal diseases but actual live animals, snakes, octopuses, even fruit, from bulbous, festering appendages. In one particularly grotesque example, roided-out men are shown giving "birth" to calves, a truly surreal spectacle of AI-generated body horror. The sheer inventiveness of the AI in conjuring new forms of affliction and subsequent "cures" is, in its own way, a testament to the unbridled nature of generative models, devoid of ethical constraints or common sense.
Even real-world celebrities can be unwittingly sucked into this nauseating spectacle. One widely circulated clip depicts the popular influencer Jake Paul approaching a white-robed priest in a rural African setting, presenting an outrageously distended belly. The priest, with a few ceremonial taps to Paul’s gut, triggers a cascade of bizarre objects to pour forth: a massive catfish, stacks of dollar bills, and even a bottle of wine – presumably to celebrate his newfound "fortune." This particular clip alone garnered over 150,000 views, illustrating how the inclusion of recognizable figures, even if digitally manipulated, can significantly boost engagement and spread the content further. The caption accompanying such videos often reinforces the illusion: "Unbelievable how the pastor prayed and helped the man healed," reads one, followed by a top fan’s earnest reply of "Amen."
Judging by the comments beneath these videos, the audience response is a complex and troubling mix. Many viewers, particularly those unfamiliar with the capabilities of AI-generated media, appear to genuinely believe that the content, churned out by an ever-hungry grist mill, is real; they express awe and share their own spiritual convictions, made vulnerable to exploitation by a lack of media literacy or a desperate search for hope. Others, perhaps more tech-savvy, may not care whether it is real, engaging for the shock value, the entertainment, or simple morbid curiosity. Some users seem to be in on a strange bit, showering the clips with emojis and ironically playing along with the absurdity, further blurring the line between genuine belief and cynical engagement. And then there are the bots: automated accounts that drum up engagement with generic comments and likes, manufacturing a false sense of popularity that encourages Meta’s algorithms to push the content even further. In the end, Meta has built a platform so dominated by cynical, AI-generated "slop" that the distinction between reality and fabrication has become largely irrelevant to its engagement metrics.
The broader implications of this "AI slop" extend far beyond visual discomfort. The trend contributes significantly to the erosion of trust in online information and visual media: when everything can be faked, and platforms amplify those fakes, the ability of individuals to discern truth from fiction is severely compromised. That has profound societal consequences, leaving populations more susceptible to sophisticated disinformation campaigns, political propaganda, and scams. Constant exposure to such graphic imagery can also harm mental health, particularly for younger or more vulnerable users who may struggle to process the grotesque nature of the content. And the exploitation of religious belief and of people’s desperation for miraculous cures is deeply unethical, preying on faith and hope with digitally manufactured falsehoods.
From a technical perspective, these videos are likely generated using advanced AI models such as Generative Adversarial Networks (GANs) or diffusion models. These models are incredibly adept at creating hyper-realistic images and video sequences, but often struggle with consistency, anatomical correctness, and the subtle nuances of human emotion and movement. This results in the "uncanny valley" effect, where the AI-generated faces and bodies look almost human but possess subtle, unsettling distortions that betray their artificial origin. These imperfections, however, are often overlooked or even contribute to the bizarre appeal in the fast-paced, low-attention-span environment of social media reels.
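To make the diffusion-model mechanism concrete, here is a minimal toy sketch of the reverse (denoising) process such models use: starting from pure noise, the sampler repeatedly subtracts a predicted noise component until a clean sample remains. Everything here is an illustrative assumption, not the pipeline any of these accounts actually uses; in particular, `toy_denoiser` stands in for the learned network that a real system would train on video frames.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 50                              # number of diffusion steps (toy value)
betas = np.linspace(1e-4, 0.02, T)  # noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x_t, t):
    """Stand-in for the learned noise predictor eps_theta(x_t, t).
    A real model predicts the noise added at step t; returning zeros
    keeps the arithmetic easy to follow."""
    return np.zeros_like(x_t)

def reverse_step(x_t, t):
    """One ancestral sampling step, mapping x_t to x_{t-1}."""
    eps = toy_denoiser(x_t, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / np.sqrt(alphas[t])
    if t > 0:
        # Intermediate steps re-inject a little fresh noise.
        return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean  # final step is deterministic

# Start from pure noise (a 2x2 array standing in for an image) and denoise.
x = rng.standard_normal((2, 2))
for t in reversed(range(T)):
    x = reverse_step(x, t)
print(x.shape)
```

The anatomical glitches described above arise because the denoiser is only a statistical guess at what "should" be there; nothing in this loop enforces that a limb has one knee or a face has two eyes.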
Ultimately, this proliferation of AI faith healing videos underscores a critical failure in platform governance. Meta’s algorithms, designed to maximize engagement at all costs, inadvertently prioritize content that elicits strong emotional reactions – whether shock, awe, or disgust – regardless of its truthfulness or quality. This "enshittification" of the internet, as described by Cory Doctorow, sees platforms prioritizing their own profits and engagement metrics over the quality of user experience, leading to a decline in useful and credible content. The responsibility for addressing this deluge of AI slop falls squarely on Meta, demanding more robust content moderation, greater transparency about AI-generated content, and a fundamental re-evaluation of the algorithms that govern what billions of users see daily. Without significant intervention, the digital landscape risks becoming an increasingly distorted and disorienting space, where faith is exploited by algorithms, and reality is drowned out by the relentless tide of manufactured miracles.
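The incentive problem described above can be sketched as a hypothetical toy ranker. All names and weights here are invented for illustration, not Meta's actual system; the point is structural: if the scoring function optimizes only predicted engagement, truthfulness never enters the objective, so a shocking fabricated clip outranks a mundane truthful one by construction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    name: str
    predicted_watch_time: float  # expected seconds watched
    predicted_reactions: float   # expected likes/shares/comments
    is_truthful: bool            # known to us, invisible to the ranker

def engagement_score(post: Post) -> float:
    # Hypothetical weights; note that is_truthful is deliberately
    # absent from the formula, mirroring the critique in the text.
    return 0.6 * post.predicted_watch_time + 0.4 * post.predicted_reactions

feed = [
    Post("local news report", 8.0, 2.0, is_truthful=True),
    Post("AI faith-healing clip", 25.0, 40.0, is_truthful=False),
]
ranked = sorted(feed, key=engagement_score, reverse=True)
print([p.name for p in ranked])  # the fabricated clip ranks first
```

Under this toy objective, fixing the output requires changing the objective itself, which is exactly the algorithmic re-evaluation the paragraph above calls for.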

