Regulators Demand Action from Tech Giants on Child Safety
Tech companies have been warned to strengthen protections for young users online, following a parliamentary vote that rejected a blanket social media ban for under-16s. The Information Commissioner's Office (ICO) and Ofcom, the UK's communications regulator, have written to multiple platforms urging them to adopt more robust safety measures for children.
Deadline Set for Age Verification and Grooming Prevention
Ofcom has written specifically to Facebook, Instagram, Roblox, Snapchat, TikTok, and YouTube, giving them until the end of April to detail how they will improve age verification and prevent online grooming. The platforms must also outline their efforts to combat harmful algorithms, with Ofcom demanding an "end to product testing on children."
Similarly, the ICO has contacted TikTok, Snapchat, Facebook, Instagram, YouTube, and X (formerly Twitter), asking how their age-check policies safeguard minors. This regulatory push comes in the wake of a Conservative-led attempt to ban under-16s from social media, which was defeated in the House of Commons by 307 votes to 173.
Research Highlights Enforcement Failures
Ofcom's research reveals significant gaps in enforcement: 72% of children aged eight to 12 use sites and apps that set a minimum age of 13. Dame Melanie Dawes, chief executive of Ofcom, criticized major tech firms for "failing to put children's safety at the heart of their products." She highlighted a disconnect between private assurances and public actions, stating, "Without the right protections, like effective age checks, children have been routinely exposed to risks they didn't choose, on services they can't realistically avoid."
Paul Arnold, chief executive of the ICO, echoed these concerns, noting, "With ever-growing public concern, the status quo is not working and industry must do more to protect children." He urged platforms to act immediately, emphasizing that modern technology provides no excuse for lacking effective age assurance measures.
Potential Enforcement and Industry Responses
Ofcom plans to report publicly on the platforms' responses in May, alongside new research assessing the Online Safety Act's impact on children's online experiences in its first year. The regulator has warned that it "will be ready to take enforcement action" if responses are unsatisfactory, and that regulations could be tightened further. The ICO has likewise indicated that "further regulatory action" may follow if high-risk services fail to comply.
In response, a YouTube spokesperson defended the platform's long-standing commitment to youth safety, expressing surprise at Ofcom's shift away from a risk-based approach. Meta, which operates Facebook and Instagram, cited existing measures such as AI-based age detection and Teen Accounts with built-in protections. Roblox highlighted its ongoing dialogue with Ofcom and the implementation of over 140 safety features in the past year, including mandatory age checks for chat access.
Broader Context and Support for Regulation
This regulatory pressure follows Australia's ban on social media for under-16s, which took effect in December last year and made it the first country to adopt such a policy. The Molly Rose Foundation, established in memory of a 14-year-old who died after viewing harmful content online, welcomed Ofcom's actions, describing them as "turning up the heat on reckless tech firms and their dangerous products which continue to cause daily harm to children."
As a consultation on a potential social media ban for under-16s continues without ministerial commitment, the focus remains on pressing tech companies to prioritize child safety through immediate and effective measures.
