Can Governments Ever Keep Up with Big Tech? The Regulatory Race Against AI and Social Media
The prime minister is now confronting a stark reality: technology moves at lightning speed, while regulation crawls at a snail's pace. This fundamental mismatch has become painfully evident as artificial intelligence and social media platforms evolve far faster than legislative frameworks can adapt.
The Online Safety Act's Struggle to Stay Relevant
The Online Safety Act was first presented to Parliament in 2019, more than two years before ChatGPT revolutionized how we interact with the internet. The legislation was not passed until 2023, and widespread enforcement began only in July of last year. Even now, certain elements remain unimplemented.
During this extended timeline, countless AI bots have entered mainstream usage, including X's Grok, CharacterAI's personalized agents, and Google's Gemini. The regulatory framework designed to govern online spaces was already outdated before it became fully operational.
Closing the AI Loophole
Today, the government announced it would close a significant loophole that previously exempted one-to-one conversations with AI bots from the regulatory scrutiny applied to social media platforms. This marks a crucial update to the Online Safety Act, acknowledging that AI chatbots now account for a substantial portion of online interactions.
Sir Keir Starmer addressed this regulatory lag directly this morning, stating that if consultations determine a social media ban represents the best course for the UK, enforcement could now occur "within months, not years" — a marked acceleration from previous timelines.
The Human Cost of Regulatory Delay
The announcement included another important change: the social media data of young people who die will now be preserved by default. This measure aims to help bereaved families obtain answers more quickly about their children's deaths.
For Ellen Roome, who campaigned for this change after her 14-year-old son Jools Sweeney died in 2022, the government still hasn't done enough. Ms. Roome believes her son attempted a dangerous online challenge, but she has been unable to access his social media data to confirm her suspicions.
"Because of the campaign after my son's death, there'll be no more grieving parents having to beg platforms and no more delays while critical evidence disappears," she stated this morning. "But we must ultimately do more to stop children being harmed or dying in the first place. Preservation after death matters. Prevention before harm matters even more."
Calls for Stricter Social Media Bans
Ms. Roome has repeatedly advocated for banning children from social media entirely. She wants the government to go further than Australia's recent ban on under-16s by prohibiting everyone under 18 from accessing these platforms.
"At 16, you're still quite naive and young. I remember thinking I was very mature at 16. Looking back, I really wasn't," she explained last year.
While the government considers these proposals, it faces the monumental challenge of keeping pace with the rapidly evolving tech industry. If regulatory frameworks cannot adapt quickly enough, preventing further tragedies involving young people online will become increasingly difficult.
The tension between technological innovation and protective legislation has never been more pronounced. As AI chatbots become more sophisticated and social media platforms develop new engagement features, the government's ability to respond effectively will determine whether the Online Safety Act can fulfill its protective purpose or remain perpetually behind the curve.