TikTok is set to implement a new, sophisticated age-verification system across the European Union in the coming weeks. This move comes as political pressure intensifies globally, with growing calls for an Australia-style ban on social media access for children under the age of 16.
How TikTok's New Detection System Works
The technology, developed specifically for the EU regulatory landscape, has been piloted quietly over the past year. It employs advanced analysis of multiple data points to identify potentially underage users. The system scrutinises profile information, the content of posted videos, and nuanced behavioural signals to predict if an account is likely operated by someone under the age of 13.
Rather than banning accounts automatically, the system escalates those flagged by its artificial intelligence to specialist human moderators for review. Only after this assessment is a decision made on whether to remove the account. In a UK pilot, the process has already led to the removal of thousands of underage accounts.
Global Political Pressure and Regulatory Scrutiny
The rollout occurs against a backdrop of heightened scrutiny from European data protection authorities, who are examining how platforms comply with age-verification rules. TikTok confirmed it worked with Ireland's Data Protection Commission, its lead EU privacy regulator, while building the system.
Politically, the issue is gaining significant traction. UK Prime Minister Keir Starmer recently told Labour MPs he was "open to a social media ban for young people", expressing alarm at reports of five-year-olds spending hours on screens and at the broader harm to under-16s. This marks a shift from his previous position that a ban would be difficult to police.
Internationally, the trend is clear. Australia's social media ban for under-16s, implemented on 10 December, has resulted in the removal of over 4.7 million accounts across ten major platforms, including TikTok, YouTube, and Instagram, according to the country's eSafety commissioner. Denmark is also advocating for a ban for those under 15, and the European Parliament is pushing for stricter age limits.
The Wider Context of Online Safety
TikTok, owned by ByteDance, and other youth-centric platforms like YouTube and Meta's Instagram are facing mounting demands to improve how they identify and remove children's accounts. Meta currently uses the third-party verification company Yoti for age checks on Facebook.
The urgency for robust measures was underscored earlier this month by Ellen Roome, mother of 14-year-old Jools Sweeney, who died after a failed online challenge. She called for greater rights for parents to access their deceased children's social media accounts.
This development also follows a 2023 Guardian investigation, which revealed that TikTok moderators had previously been instructed to allow under-13s to remain on the platform if they claimed parental supervision. The contrast with the new system underlines how much the platform's approach to age enforcement has tightened.