UK’s Online Safety Act Comes into Force: Children Protected, but at What Cost to Privacy and Free Speech?
Date Enforced: 25 July 2025
What Happened?
The UK’s flagship Online Safety Act is now enforceable, marking a major shift in the digital landscape. Platforms are legally required to take active steps to protect users, especially children, from harmful or inappropriate content, including through age verification, content moderation, and assessing the risks their designs and algorithms create. But critics argue the price of safety may be too high.
What Does the Act Require?
- Highly effective age checks, using methods such as photo ID matching, facial age estimation, or third-party verification services.
- Platforms must assess the risk their design and algorithms pose to children.
- Content that is harmful to children, even if legal, must be managed or removed.
- Non-compliance risks fines of up to £18 million or 10% of qualifying global turnover, whichever is greater.
- Ofcom gains expanded powers for enforcement and investigation.
Supporters Say It’s About Time
Ministers and campaigners claim the Act is long overdue. Technology Secretary Peter Kyle called it “a line in the sand” against years of unregulated exposure of children to damaging content. Baroness Beeban Kidron, a leading architect of the Act, said it ensures tech companies finally carry responsibility for child safety online.
Ofcom, which is now the lead regulator for online safety, says the framework provides long-awaited powers to tackle abuse, grooming, and dangerous trends spread through social media.
Privacy Groups Are Alarmed
Opponents warn the law is vague, excessive, and dangerous. The biggest concern is age verification: platforms hosting pornography or other content deemed harmful to children must now run identity or facial age checks, effectively ending anonymous access to large swathes of the internet for UK users. Privacy advocates say this could lead to data leaks, surveillance creep, and an erosion of basic freedoms.
The Electronic Frontier Foundation and Wikimedia Foundation have criticised the Act’s scope, warning it could force platforms to collect and store more personal data than ever before—even for users simply reading articles or watching videos.
Encryption Under Threat
The Act may also compromise end-to-end encryption. Ofcom has powers to require companies like WhatsApp and Signal to deploy technology to scan private messages for child sexual abuse and terrorism content. These platforms argue there is no way to do so without weakening security for everyone.
Signal has publicly stated it will leave the UK if forced to break encryption, while WhatsApp has called the rules incompatible with privacy-by-design.
Mass VPN Usage and Workarounds
Since enforcement began, VPN usage in the UK has surged. Proton VPN and other providers report record sign-ups as users seek to bypass age restrictions and maintain digital anonymity. Some adult sites, forums, and niche platforms have opted to block UK users entirely rather than comply with costly and invasive verification processes.
Legal But Harmful: Who Decides?
Another point of controversy: the Act also covers content that is legal but deemed “harmful,” particularly where children are concerned. Critics say the definition is too broad and subjective, opening the door to censorship and overreach. Satire, political speech, or discussions of sensitive topics could, for example, be flagged for removal.
Smaller platforms may overcompensate, taking down borderline content just to avoid scrutiny. Some argue this creates a chilling effect, especially for controversial or dissenting viewpoints.
What Happens Next?
While some parts of the Act are already in force, others will roll out gradually over the next year. By March 2026, all regulated platforms must show they’ve adopted age assurance and content risk mitigation strategies.
Legal challenges are expected, especially from privacy and civil liberties groups. Ofcom has said it will work closely with tech firms—but also made clear that non-compliance will have consequences.
Conclusion
There is no doubt the internet needs reform. Children must be protected, and big tech should be held accountable. But the Online Safety Act may go further than necessary—eroding privacy, weakening encryption, and setting dangerous precedents for censorship.
Its long-term impact will depend on how regulators enforce it, how platforms respond, and whether the UK public demands accountability for both safety and freedom.