TikTok and Meta Allegedly Pushed Harmful Content for Views, Whistleblowers Claim
Whistleblowers and insiders from TikTok and Meta have alleged that the tech giants allowed harmful content to spread in user feeds to maximize engagement, according to a BBC investigation. The revelations come as a new documentary, Inside the Rage Machine, airing tonight at 9 pm on the BBC, explores how the industry promoted problematic posts to increase views.
Internal Decisions Prioritized Engagement Over Safety
More than a dozen whistleblowers and insiders detailed how TikTok and Meta took risks with safety on issues including violence, sexual blackmail, and terrorist recruitment. Researchers found that the platforms' algorithms rewarded outrage, driving harmful content into users' feeds. One Meta engineer claimed senior management instructed them to allow more borderline harmful content, such as misogyny and conspiracy theories, in order to compete with TikTok.
TikTok's Alleged Prioritization of Political Cases
A TikTok trust and safety team member, Nick, shared internal documentation showing the company prioritized cases involving politicians over reports of harmful posts featuring children. For example, a trivial case mocking a political figure was ranked higher than a report from a 17-year-old in France of illegal cyberbullying. Another case, involving sexualized images of a 16-year-old in Iraq shared on the app, was given low urgency despite the high risk.
Nick stated that when the team requested to prioritize cases involving young people, they were told to follow the existing rankings. He advised parents to delete the app and keep children as far away as possible, alleging the company prioritizes relationships with politicians over children's safety.
Meta's Reels Launch and Algorithm Concerns
A senior Meta researcher, Matt Motyl, revealed that Instagram Reels launched in 2020 without sufficient safeguards. Internal research showed Reels had significantly higher prevalence of harmful comments compared to the main Instagram feed:
- 75% higher for bullying and harassment
- 19% higher for hate speech
- 7% higher for violence and incitement
Documents indicated Meta's algorithm offered a path that maximized profits at the expense of its audience's wellbeing. A former Meta engineer, Tim, claimed borderline harmful content was permitted as views declined, with decisions made by a senior vice-president reporting directly to Mark Zuckerberg to boost revenue by 2-3% per quarter.
Company Responses and Denials
Meta denied the whistleblowers' claims, stating it has strict policies and has invested significantly in safety over the last decade. The company highlighted new Teen Accounts with built-in protections and parental tools. TikTok dismissed the allegations as fabricated, emphasizing investments in technology to prevent harmful content from being viewed, along with more than 50 preset safety features for teen accounts.
A TikTok spokesperson added that the platform enables millions to discover new interests and supports a thriving creator economy in the UK, maintaining strict recommendation policies and user customization features.