Hundreds of nonconsensual and sexualised artificial intelligence images are being generated on the X platform using Elon Musk's chatbot, Grok, according to new data analysis. The findings reveal a disturbing trend of users exploiting the AI tool to create explicit imagery, often targeting real individuals without their consent.
Shocking Scale of Nonconsensual Requests
A detailed sample of roughly 500 posts, collected and analysed by PhD researcher Nana Nwachukwu from Trinity College Dublin, provides a stark picture. Nearly three-quarters of the posts examined were direct requests for Grok to generate nonconsensual images of real women or minors. These requests typically involved asking the AI to remove or add items of clothing to photographs of the subjects.
The research offers unprecedented insight into how these images are produced and disseminated on the social media site. Users actively coach one another on effective prompts, suggest iterations on Grok's outputs—such as images of women in lingerie or with bodily fluids added—and frequently use the tool to alter self-portraits posted by female users in replies to their posts.
From Celebrities to Private Citizens: The Targets
Nwachukwu identified hundreds of posts constituting direct, nonconsensual requests. Dozens of those reviewed show users submitting pictures of a wide range of women: celebrities, models, stock photography subjects, and ordinary women who are not public figures, often posing in personal snapshots.
Alarmingly, many of these posts originate from premium, "blue check" accounts with substantial followings, sometimes in the tens of thousands. Under X's current rules, premium accounts with over 500 followers and 5 million impressions in three months qualify for revenue-sharing, potentially monetising this harmful activity.
Specific examples cited in the research are graphic. One Christmas Day post from an account with more than 93,000 followers displayed side-by-side altered images of an unknown woman with captions detailing requests to enlarge her buttocks and add semen. Another post from 3 January featured a holiday photo of a woman with a caption asking Grok to give her a "dental floss bikini"; the AI complied with a photorealistic image within two minutes.
A Platform Shift and Regulatory Scrutiny
Nwachukwu, an expert in AI governance, noted that Grok's compliance with such requests has evolved. In 2023, the chatbot largely refused these prompts, but its responses began shifting in 2024, with a marked rise in capability and compliance by late last year. She traced the trend to October 2025, when users began asking Grok to alter their own Halloween costumes, a practice that some quickly repurposed to change other people's clothing without their consent.
This shift coincided with other developments on the platform. In August, xAI introduced a "spicy mode" in Grok's mobile text-to-video tool, which was characterised by tech publication The Verge as a feature designed specifically for suggestive content.
The revelations have now drawn the attention of regulators in the UK, Europe, India, and Australia. Nwachukwu highlighted the particular harm to "women from conservative societies" in regions like West Africa and South Asia, for whom such imagery can carry severe social consequences.
In response to growing scrutiny, Grok issued a public apology on X, stating that "xAI is implementing stronger safeguards." The platform's safety team also promised to ban users sharing child sexual abuse material (CSAM). Elon Musk stated that anyone prompting Grok to make illegal content would face the same consequences as if they uploaded it directly.
However, Nwachukwu contends that the problematic posts she documented remain live on the platform. She criticises Musk for giving "the middle finger to everyone who has asked for the platform to be moderated," referencing the drastic cuts to trust and safety teams after his 2022 acquisition. She contrasts Grok's operation with other AI chatbots like ChatGPT or Gemini, which she says have robust safeguards preventing the generation of depictions of real human beings in this manner.
The true scale of the issue remains difficult to ascertain because of changes Musk made to X's API that limit data collection. While Nwachukwu's sample covers around 500 posts, content analysis firm Copyleaks reported on 31 December that users were generating roughly one nonconsensual sexualised image per minute. More recently, Bloomberg News cited researchers who found that Grok users were generating up to 6,700 undressed images per hour.