EU Parliament Blocks Child Abuse Scanning Law Extension, Creating Legal Gap

The European Parliament has decisively blocked the extension of a critical law that permitted major technology firms to scan their platforms for child sexual exploitation material. This legislative lapse creates a significant legal gap that child safety experts warn will lead to undetected crimes and reduced reporting of abuse across digital platforms.

Temporary Measure Expires Amid Privacy Concerns

The law in question, originally established in 2021 as a temporary derogation from the EU's ePrivacy rules, allowed companies to use automated detection technologies to scan messages for harmful content, including child sexual abuse material (CSAM), grooming activities, and sextortion attempts. The legislation officially expired on April 3 after the EU Parliament declined to vote on its extension, citing privacy concerns raised by certain lawmakers.

The regulatory void has created substantial uncertainty for technology giants including Google, Meta, Snap, and Microsoft. With the exemption lapsed, scanning messages for harmful content may now be unlawful under EU privacy rules, yet these companies remain legally obligated under the separate Digital Services Act to remove any illegal content hosted on their services.


Tech Companies Continue Voluntary Scanning Efforts

In a joint statement published on a Google blog, the four major technology firms announced they would continue voluntarily scanning their platforms for child sexual abuse material despite the legal changes. "We are disappointed by this irresponsible failure to reach an agreement to maintain established efforts to protect children online," the companies stated collectively.

The European Parliament responded with its own statement, emphasizing its prioritization of ongoing legislation to prevent and combat child sexual abuse online. While negotiations for a permanent legal framework continue, the legislative body has provided no specific timeline for agreements or implementation of new protective measures.

Historical Precedent Shows Dramatic Impact

Child protection advocates had strongly cautioned that allowing this legislation to lapse would likely trigger a sharp decline in reports of child sexual abuse. They point to a similar legal gap that occurred in 2021, when reports of such material from EU-based accounts to the National Center for Missing and Exploited Children (NCMEC) plummeted by 58% over an 18-week period.

"When detection tools are disrupted, we lose visibility that directly impacts our ability to find and protect child sexual abuse victims," explained John Shehan, vice-president at NCMEC, a US-based organization that serves as a clearinghouse for child abuse reports forwarded to law enforcement agencies worldwide. "When detection goes dark, the abuse doesn't stop."

In 2025 alone, NCMEC received 21.3 million reports containing more than 61.8 million images, videos, and other files suspected of being related to child abuse from around the globe. Approximately 90% of these reports originated from countries outside the United States.

Cross-Border Implications and Increased Risks

The EU's decision to prohibit scanning will create ripple effects across other regions worldwide, according to child safety experts. Many internet crimes operate across borders, with perpetrators sending illegal images to individuals or targeting children in different countries. "Sextortionists," who pose as romantic interests to trick people into sending intimate photographs before making blackmail attempts, may also capitalize on the legal changes, Shehan warned.

"The offender can be anywhere in the world, but they could have unfettered access to minors in Europe now that there's legal uncertainty around those safeguards and protections to identify when a child is being groomed," Shehan elaborated.

Years of Tense Negotiations Lead to Current Situation

For the past four years, the proposed child sexual abuse regulation has been under intense negotiation, with contention arising because it would obligate companies to implement measures minimizing risks on their platforms, according to Hannah Swirsky, head of policy and public affairs at the Internet Watch Foundation, a UK-based child safety non-profit organization.


Privacy advocates argue that allowing big tech companies to scan messages for child abuse threatens fundamental privacy rights and data security for EU citizens, equating these measures to "chat control" that could potentially lead to mass surveillance and false positives.

"There are claims of surveillance or infringement of privacy," Swirsky acknowledged. "Blocking CSAM is not an invasion of privacy. Free speech does not include sexual abuse of children."

How Scanning Technology Actually Works

The scanning technology in question uses machine learning to perform pattern detection, identifying known images or videos of abuse as well as language associated with child exploitation, without storing any data, explained Emily Slifer, director of policy at Thorn, a non-profit organization whose abuse-detection technology is widely used by companies and law enforcement agencies.

The system operates through trained analysts reviewing known CSAM obtained from external sources such as police reports, public submissions, or investigations into websites known for hosting child abuse material. When analysts confirm content as illegal child sexual abuse, they generate a unique digital fingerprint—known as a hash value—that identifies that exact image. Lists of hash values are then shared with platforms, which use automated systems to scan uploads and block matching content instantly without requiring human review.
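The hash-matching step described above can be sketched in a few lines. This is a simplified illustration, not any vendor's actual implementation: production systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas the SHA-256 fingerprint below only matches byte-identical files. The sample data and function names are hypothetical.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint (hash value) for a file's bytes.

    SHA-256 stands in here for the perceptual hashes real systems use;
    either way, only the fingerprint is compared, not the content itself.
    """
    return hashlib.sha256(data).hexdigest()


# Hash list shared with the platform, built from content that trained
# analysts have already confirmed as illegal (values are illustrative).
known_hashes = {fingerprint(b"analyst-confirmed-sample")}


def should_block(upload: bytes) -> bool:
    """Block an upload instantly when its fingerprint is on the list.

    The match requires no human review and no storage of the upload;
    non-matching content passes through untouched.
    """
    return fingerprint(upload) in known_hashes
```

In this sketch, a re-upload of a confirmed file matches its hash and is blocked, while any other file, even one differing by a single byte, does not match; closing that single-byte gap is exactly why deployed systems rely on perceptual rather than cryptographic hashes.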

"The technology doesn't find babies in bathtubs and things like that. If you just think of what an image of abuse would look like versus what consensual content would look like: those are two very different pieces of material, and technology can determine those patterns between them," Slifer clarified.

While the EU has blocked scanning for child abuse content, it has permitted technology companies to voluntarily scan messages for terrorist content detection under legislation adopted in 2021, she noted.

"The EU is effectively risking open doors for predators," Swirsky concluded. "If the EU is serious about protecting children online, then it needs to agree on a permanent legislative framework for safeguarding children and for enabling detection."