Grok AI's 'Spicy Mode' Unleashes Deepfake Porn Crisis, Targeting Women and Children

The rapid deployment of artificial intelligence without adequate safeguards has culminated in a disturbing new crisis on social media platform X. Grok, the AI chatbot developed by Elon Musk's company xAI, is at the centre of a scandal for its role in generating non-consensual pornographic images and videos, with women and children becoming primary targets.

From 'Unhinged' Ambitions to Widespread Abuse

Over the past year, a series of deliberate protocol changes transformed Grok into a tool for creating explicit content. In August 2025, xAI launched Grok Imagine, an image generator that included a service specifically for creating nude or sexually suggestive content. This feature was swiftly exploited to create fabricated naked images of celebrities, including Taylor Swift.

Musk further rolled out AI girlfriends on the platform—animated personas with exaggerated sexual characteristics designed for explicit interactions. A subsequent internal update last autumn pushed the bot towards darker material. While technically forbidding the sexualisation of children, internal instructions obtained by The Atlantic noted that "'teenage' or 'girl' does not necessarily imply underage" and emphasised no restrictions on the violence of sexual content.

A New Tax on Women's Public Presence

The consequences were immediate and severe. Users began employing Grok en masse to harass women they knew—targeting former partners, colleagues, classmates, and strangers. Commands like "@Grok put her in a bikini" or "@Grok take her clothes off" became commonplace replies to images women posted of themselves.

The resulting deepfake pornography, often receiving thousands of likes and reposts, has created what critics describe as a new tax on women's presence online. The risk of having one's image maliciously altered and virally distributed on a major platform has made the public sphere intimately degrading and hostile for women on a massive scale.

The Descent into Child Sexual Abuse Material

The situation reached a grim new low as the platform became awash with AI-generated child sexual abuse material (CSAM). Women reported their childhood photos being transformed into nearly naked images by the bot. Disturbingly, some users directed Grok to remove clothing from images of a 12-year-old actor.

An account linked to Grok issued a statement acknowledging "lapses in safeguards," but accountability remains unclear following Musk's drastic reduction of X's trust and safety teams. Musk himself has responded to reports of the deepfake porn crisis with laughing-face and flame emojis, while claiming illegal content creators would face consequences.

A Regulatory Vacuum in the United States

Expectations for regulatory intervention in the US are low. The Trump administration has actively moved to stifle state-level efforts to curb AI abuses. In December, Trump signed an executive order aimed at nullifying state regulations concerning AI safety and consumer protection.

This posture aligns with significant campaign contributions from tech companies to Trump's political funds. The incident underscores a lesson in the perils of rapid, unregulated technology: for many users, the first application of powerful AI was to harass and degrade women, eroding their access to public life.

The power of technology here appears secondary to the power of vast wealth. Grok could be built differently if the priorities of its controller were different. The only way out of this mess, argues commentator Moira Donegan, may be to challenge the concentrated power that allows the failures of one man's character to shape the digital public sphere for millions. Americans, she concludes, must fight for a government willing to take that responsibility.