What Europe Just Taught Me About AI, Trust, and the Future

A behind-the-scenes look at what I learned in Brussels, and how regulation, innovation, and public trust are colliding faster than anyone expected.

I just got back from an incredible trip to Europe, where I spent time in Brussels with people inside the rooms shaping the EU’s AI rules. I was invited by the EU Delegation to the United States and the Embassy of Italy in Washington, which gave me access, among many other things, to briefings and conversations with the leaders behind the AI Act. We throw around the AI Act like it’s just another headline, but being there made it feel real, immediate, and deeply necessary as the U.S. tries to understand its own path forward.

One of the most interesting voices we heard from was Brando Benifei, who entered the European Parliament at 27 and is now one of its longest-serving younger members. He served as one of Parliament's lead negotiators (co-rapporteurs) on the AI Act, and you could feel how much weight and responsibility he carries as Europe steps onto the global stage.

Meeting with Brando Benifei

Everyone kept returning to the same set of questions: How do you regulate without stifling innovation? How fast can we push AI without eroding public trust? And with Europe acting first, what does that mean for everyone else?

The EU AI Act officially entered into force on August 1, 2024, with its obligations phasing in over the following years. It's the first comprehensive, legally binding AI law anywhere in the world.

At its core, the idea is simple: the higher the risk an AI system poses to people, the more rules it has to follow. Some things are banned outright, like social scoring, predictive policing based on profiling, emotion recognition in schools or workplaces, and scraping people’s faces from the open internet to build biometric databases. Real-time facial recognition in public spaces is also mostly off the table, aside from a few narrow law-enforcement exceptions.

Other uses, like AI in hiring, healthcare, education, credit, or policing, aren’t banned, but they now come with strict requirements around testing, documentation, bias safeguards, and human oversight. Even the big general-purpose models fall under this framework.

What surprised me most wasn’t the legal language. It was the tone. There was almost no hype. People talked less about fears of “overregulation” and more about trust, accountability, and what it actually takes to make this work in the real world, in a way that protects people, not just companies. They were honest about missteps and clear that the system has to stay flexible as the tech evolves.

That mindset explains why the European Commission created a permanent AI Office to oversee enforcement. Europe sees this as long-term infrastructure, not a one-time policy win.

And the AI Act isn’t happening in a vacuum. It sits alongside broader digital rules like the Digital Services Act, which pushes platforms to be more transparent about how content is amplified or moderated, and adds new protections for minors, including limits on targeted ads and stricter safety requirements.

Outside of Europe, things are messier.

In the U.S., there still isn’t a single federal AI law. Most of what exists today comes through executive orders, agency enforcement, and state-by-state rules. The biggest federal move so far remains the White House executive order on Safe, Secure, and Trustworthy AI, issued in October 2023.

States are filling in the gaps. Colorado passed one of the first comprehensive state-level AI laws focused on preventing algorithmic discrimination in high-risk systems, although parts of it are already facing delays.

The UK chose a softer path. Instead of one big AI law, it is relying on existing regulators and a centralized AI Safety Institute to evaluate frontier models.

Canada tried to move early with its Artificial Intelligence and Data Act under Bill C-27, but that effort stalled. A revised version is expected to return.

And beyond national borders, the Council of Europe adopted the first international treaty on AI, human rights, and democracy in 2024. The U.S., UK, and EU have all signed it.

What Europe has right now that others don’t is clarity. Not perfection, but clarity. Builders know where they fall. Companies know what counts as “high risk.” And responsibility doesn’t end with the model developer. If you deploy AI in sensitive contexts, accountability stays with you.

You can’t shrug and say, “We’re just using someone else’s model.” If you deploy AI to make decisions that affect people’s lives, you own the impact. As EU leaders kept saying: if something is illegal offline, why should it be legal online? The real questions are how you actually enforce that at scale, and, from an American perspective, how you do it without running into free-speech concerns.

The U.S., by comparison, is moving faster on experimentation, spurred by the race to beat China, but slower in setting one shared baseline. That fragmentation puts uneven pressure on startups versus Big Tech: big companies can absorb regulatory uncertainty; small ones usually can’t.

What the U.S. can take from Europe without “becoming Europe” is straightforward: shared language around risk, baseline expectations for AI that touches jobs, health, or money, and real technical expertise inside government. The rise of the UK AI Safety Institute and the EU AI Office shows where this is heading globally.

On the creator and media side, this all lands in a very personal way. AI accelerates everything, while trust becomes harder to earn and burnout easier to reach. No single law can fix that, but policy does shape which platforms, incentives, and business models dominate the landscape.

The AI Act won’t solve the cultural shifts, but it does move AI out of the “wild west” phase and into something more accountable. That alone changes the global tone.

The bigger story is simple:
Europe moved first.
The U.S. is still stitching together its approach.
The UK, Japan, Canada, and global institutions are layering coordination on top.

Whether these systems clash or start to align will depend on the pressure coming from citizens, creators, builders, and investors. I’ll keep tracking this from both sides, the policy side and the human side, since they’re far more intertwined than most people realize. I’m also paying close attention to UNESCO’s Guidelines for the Governance of Digital Platforms and what they could signal about a future global framework.

🎧 New Episode of The AI Download: Activism, Fashion & AI Collide: A Gen Z Founder’s Playbook (ft. Sophia Kianni)

In this episode of The AI Download, I sat down with Sophia Kianni to talk about what it really looks like to build a values-driven company in the middle of an AI boom. Sophia’s path from climate activism and Stanford to co-founding Phia with Phoebe Gates is a powerful example of how Gen Z is blending purpose, tech, and culture in real time. We talked about how AI can actually support sustainability when it’s used thoughtfully, how Phia is tackling fashion waste with data and secondhand shopping, and why community and distribution often matter just as much as the tech itself.

We also got honest about burnout, boundaries, and what it’s like to lead in your twenties while everything moves at AI speed. Sophia shared how she uses AI as a creative and productivity tool without losing her voice, and why she’s so focused on staying grounded while building for the long term in an industry that doesn’t sleep.

🎙️ Listen now and subscribe wherever you get your podcasts!

Gentle Reminder 🙏

“Ignore the glass ceiling and do your work. If you’re focusing on the glass ceiling, focusing on what you don’t have, focusing on the limitations, then you will be limited.” - Ava DuVernay

Want to advertise with me and sponsor #TheAlpha?
Reply to this email and let's chat :)

Remember, I'm Bullish on you! With gratitude,