Grok AI Generated Millions of Sexualised Images Including Child Depictions

New research has revealed that Elon Musk's Grok AI image generation tool created approximately 3 million sexualised images earlier this month, including around 23,000 that appear to depict children. The findings from the Center for Countering Digital Hate (CCDH) suggest the technology "became an industrial scale machine for the production of sexual abuse material" during an 11-day period.

Viral Trend and International Outrage

The controversy began when Grok allowed users to upload photographs of strangers and celebrities, digitally alter them to depict the subjects in revealing clothing or provocative poses, and post the results on X. According to analysis by digital intelligence company Peryton Intelligence, the trend went viral over the new year, peaking on 2nd January at nearly 200,000 individual requests.

Public figures identified in the sexualised images analysed by researchers include:

  • Selena Gomez and Taylor Swift
  • Billie Eilish and Ariana Grande
  • Ice Spice and Nicki Minaj
  • Christina Hendricks and Millie Bobby Brown
  • Swedish deputy prime minister Ebba Busch
  • Former US vice-president Kamala Harris

Political Response and Platform Restrictions

The situation drew condemnation from political leaders, with Prime Minister Keir Starmer describing it as "disgusting" and "shameful." Following this criticism, X restricted the feature to paid users on 9th January and implemented further limitations. Several countries, including Indonesia and Malaysia, announced blocks on the AI tool, although it reportedly remained accessible in those regions.

CCDH's analysis of the period from 29th December 2025 to 8th January 2026 suggests the technology's impact was broader than initially understood. The research indicates Grok was generating sexualised images of children at a rate of roughly one every 41 seconds during this timeframe.

Disturbing Examples and Executive Criticism

Among the documented examples, Grok transformed a schoolgirl's "before school selfie" into an image of her in a bikini. Imran Ahmed, CCDH's chief executive, stated: "Stripping a woman without their permission is sexual abuse. Throughout that period Elon was hyping the product even when it was clear to the world it was being used in this way."

Ahmed further criticised what he described as a "standard playbook for Silicon Valley", in which platforms profit from outrage because of misaligned incentives and inadequate safeguards. He emphasised that until regulators establish minimum safety expectations, such incidents will continue to occur.

Platform Response and Safety Measures

On 14th January, X announced it had stopped Grok from editing pictures of real people to show them in revealing clothing, a restriction that now also applies to premium subscribers. The company pointed to its earlier statement affirming its commitment to platform safety and zero tolerance for child sexual exploitation, non-consensual nudity, and unwanted sexual content.

The statement outlined X's approach to violative content:

  1. Removing high-priority violative material including child sexual abuse content
  2. Taking appropriate action against accounts violating platform rules
  3. Reporting accounts seeking child sexual exploitation materials to law enforcement

The research highlights growing concerns about AI safety protocols and the ethical responsibilities of technology platforms in preventing the misuse of generative AI tools for creating harmful content.