AI-Generated Child Sexual Abuse Material Sees Dramatic 260-Fold Increase
The Internet Watch Foundation (IWF), a leading online safety watchdog, has reported a staggering 260-fold surge in identifications of realistic AI-generated child sexual abuse videos last year. In 2025, the organisation verified 8,029 pieces of AI-generated child sexual abuse material (CSAM), marking a 14% overall rise in such material found online.
Severity of AI-Generated Content Alarms Experts
Among the 3,443 videos analysed, a shocking 65% were classified as Category A, the most extreme and violent content under UK law. This figure starkly contrasts with the 43% rate for non-AI videos, indicating that artificial intelligence is being exploited to produce more severe and harmful material. Kerry Smith, chief executive of the IWF, emphasised the grave implications, stating, "Advances in technology should never come at the expense of a child's safety and wellbeing. While AI can offer much in a positive sense, it is horrifying to consider that its power can be used to devastate a child's life. This material is dangerous."
Dark Web Discussions Highlight Growing Threat
IWF analysts have uncovered conversations among paedophiles on the dark web, where innovations in AI technology are "regarded with delight" by users of CSAM. These discussions focus on the increasingly realistic outputs of AI systems, including their ability to add audio to videos or manipulate imagery of real children known to offenders. Additionally, offenders are exploring the potential of "agentic" systems, which can autonomously carry out tasks, further escalating the risk.
Government and Tech Response to AI Abuse
In response to this growing crisis, the UK government has empowered tech companies and child protection agencies to test AI tools for their potential to generate CSAM. This initiative, announced last year, aims to prevent abuse before it occurs by allowing designated organisations to examine generative AI models, such as those behind chatbots like ChatGPT and video generators like Google's Veo 3. The goal is to ensure these systems have robust safeguards in place. Smith added, "Children, victims and survivors cannot afford for us to be complacent. New technology must be held to the highest standard. In some cases, lives are on the line."
Public Demand for Safety Legislation
The IWF also released polling data showing that eight out of 10 UK adults support government legislation to ensure AI systems are developed with safety as a priority and are "future-proofed from causing harm." Last year, the government implemented a ban on possessing, creating, or distributing AI models designed to generate child sexual abuse material, reflecting a broader effort to combat this digital menace. As AI proficiency and availability increase, the IWF has noted a sharp rise in verified CSAM, particularly in video form, underscoring the urgent need for continued vigilance and action.