UK Introduces Strict Online Safety Rules to Protect Children

Ofcom, the UK’s communications regulator, has announced stringent new rules requiring tech companies to block children’s access to harmful online content or face heavy fines under the Online Safety Act, The Guardian reported. The measures, effective from July 25, target social media, gaming, and search platforms.
Tech firms must implement “highly effective” age verification systems to identify under-18 users, filter harmful content from algorithmic recommendations, and establish rapid takedown procedures for dangerous material. High-risk platforms, including major social media sites, must also provide children with simple reporting tools.
Technology Secretary Peter Kyle called the rules a “watershed moment,” emphasizing the need to shield children from “toxic experiences” like self-harm, bullying, and pornography. He hinted at potential future measures, such as social media curfews, citing TikTok’s recent “wind-down” feature for under-16s.
Ian Russell, whose 14-year-old daughter Molly died after viewing harmful content, criticized the rules as “overly cautious” and profit-driven. His Molly Rose Foundation argued the measures fail to adequately address suicide-related content or dangerous online challenges.

Ofcom CEO Melanie Dawes defended the changes as a “reset” for child safety online, warning that non-compliance would be met with enforcement. Companies that miss the July deadline risk fines or having their services blocked in the UK.
The codes, issued under the Online Safety Act and subject to parliamentary approval, include more than 40 obligations, such as clearer terms of service for children, annual risk reviews, and a named individual accountable for children’s safety. The NSPCC welcomed the move but urged Ofcom to tighten oversight, particularly where hidden online risks remain unchecked.