The Age-Verified Internet: Do We Really Want to Be Completely Tracked?

The push for age verification isn’t just a UK experiment—it’s a global shift toward a fully tracked internet, one where every click, search, and comment could be tied to your real-world identity.

If you’ve been online long enough, you remember the Wild West days—when you could log on as SurfGirl92 and vanish into a forum without ever revealing your real identity. It was messy. Trolls thrived, scams ran rampant, and both still do. But there was also freedom. You could explore, connect, and speak without your government-issued ID following you around.

That world is disappearing.

In the UK, the new Online Safety Act is forcing a massive shift. As of July 25, 2025, any site or app hosting adult or “harmful” content must use “highly effective” age checks—meaning facial scans, government-issued IDs, credit-card verification, or digital identity wallets. No more casual “I’m over 18” clicks.

On paper, it’s about protecting kids from stumbling into the internet’s darkest corners—pornography, violent content, and self-harm forums. Platforms like Reddit, Discord, YouTube, and Xbox now face fines of up to £18 million or 10% of global annual turnover, whichever is greater, if they fail to comply.

But the fallout has already gone far beyond what lawmakers claimed. As Taylor Lorenz wrote in The Guardian, “Over the past two weeks, the UK has reportedly blocked internet users’ access to everything from SpongeBob SquarePants gifs to Spotify playlists. Information about Joe Biden’s police funding plan has been restricted, along with a post about an up-and-coming political party.” Gamers have even reported being unable to change color settings in Minecraft.

And this isn’t staying in the UK. Lorenz points out that “Australia and Ireland have passed similar age verification measures. Denmark, Greece, Spain, France, and Italy have started testing a common age-verification app, paving the way for potential mandatory EU-wide use.” In the U.S., 11 states are already pushing their own laws, with Louisiana’s 2022 law serving as the template. The federal Kids Online Safety Act—co-sponsored by Senators Richard Blumenthal and Marsha Blackburn—could move forward at any moment. Lorenz notes that some political leaders have openly discussed using such laws to remove LGBTQ+ content or promote partisan agendas under the guise of “child safety.”

Here’s the uncomfortable truth: in trying to make the internet safer, we’re also dismantling one of its most important freedoms—anonymity—and opening the door to something even bigger: centralized control over what we can see, say, and share.

Privacy

Anonymity online doesn’t just protect bad actors. It shields whistleblowers, survivors, vulnerable teens in unsafe homes, and people who turn to communities like r/sexualassault or r/stopdrinking for support. Under these new rules, many of those spaces are now locked behind ID gates. The Electronic Frontier Foundation warns these systems could “undermine privacy and data protection, and limit freedom of expression.” VPN downloads in the UK are spiking, but even VPN users can still be identified through browser fingerprinting and other tracking techniques.

Censorship

Once identity verification and content classification are centralized, censorship isn’t a hypothetical: it’s built in. What’s “harmful” today might be pornography or graphic violence. Tomorrow, it could be political dissent, LGBTQ+ content, or anything that challenges those in power. Even tech leaders like Marc Andreessen have warned the UK government that the law risks privacy violations, censorship, and harm to everyday users.

Freedom

And here’s the bigger issue: once identity verification becomes the norm in one country, it rarely stays there. The U.S., EU, and other governments are already exploring similar rules. We can want safer spaces for kids — and for all of us — without building a fully tracked internet where every search, click, and comment is tied to a verified identity.

My hot take🔥: The question isn’t just whether kids are safer. It’s whether the rest of us can still be free — and whether “safety” is becoming the excuse to build the most surveilled, censored internet we’ve ever known.

Shira Lazar

ICYMI

OpenAI just dropped GPT-5, their most innovative, fastest, and most accurate model yet. It’s available to everyone (yes, even free users, though you’ll hit usage limits and a prompt to upgrade after a few queries) and can now reason through problems, hallucinate less, explain its limits, and give “safe completions” instead of flat-out refusing risky questions.

Other headlines to check out:

AI

Creator Economy

Web3 

Gentle Reminder 🙏

“Don't let someone else's opinion of you become your reality.”

Advertise with Us

Remember, I'm Bullish on you!

With gratitude,