Something important happened last week, and in the midst of the Epstein Files, the Savannah Guthrie kidnapping, the Super Bowl, the Olympics, and the rest of the national noise, most people missed it.
New York passed a law requiring advertisers to disclose when an ad uses AI-generated synthetic performers. That includes digital humans that look or sound real, even if they are not based on a specific celebrity. The rule applies to ads shown to New York audiences, regardless of where they were created, and it goes into effect in June 2026.
This is the first time a U.S. state has clearly said that audiences deserve to know when a human presence in media is not actually human. That idea may sound obvious, but until now it has lived mostly in ethics discussions and policy proposals, not law.
The White House has issued executive actions discouraging state-by-state AI regulation in favor of federal review. New York passed this anyway. That tension matters. It shows how fast this is moving and how little patience states have to wait for a national standard.
Advertising is where regulation tends to start because brands are measurable and dollars leave a paper trail. But creators are not far behind in this conversation.
Creators already operate in a space built on trust. People follow you because they believe you are real. Sponsored posts are disclosed. Filters are usually obvious. Editing is expected.
AI changes that equation in ways audiences are not prepared for yet.
Tools now exist that can replicate voices, generate faces, put words in your mouth, and scale your presence without your physical participation. None of this is inherently bad. A lot of it is interesting and useful and can improve storytelling and reach. I am personally into anything that helps me work more efficiently and create better output.
But it also changes the question audiences ask.
It is no longer just "Is this sponsored?"
It becomes "Who am I actually hearing from, and who decided that voice should exist?"
We already saw public pushback when Vogue used an AI-generated model last year. The backlash was not about the tech. It was about trust, representation, and transparency. I spoke about this on CNN at the time because it felt like an early signal of how uncomfortable people are when human likeness is simulated without context.
This matters even more as companies like Meta build entire business units around AI-generated content and AI-optimized advertising. If platforms are selling AI-created creative as a product to brands, regulators and audiences are going to start asking similar transparency questions of creators too.
This also comes as platforms like YouTube say they are cracking down on low quality AI slop channels. That stance sounds good in theory, but it is harder to imagine platforms holding advertisers to the same standard when ad dollars are involved. If anything, this makes disclosure even more important, because consumer protection should not depend on whether content is organic or paid.
New York’s law does not mention creators. It does not regulate influencers (human or AI) or touch organic content. Yet.
What it introduces is a principle that is hard to contain once it exists:
• That human likeness carries meaning
• That audiences deserve to know whether what they are watching was made by a human or primarily by a machine
• That transparency matters
Here is where this gets messy. If every state passes its own version of an AI disclosure law with slightly different requirements, companies, platforms, and creators are stuck navigating a compliance maze. That slows innovation and advantages the biggest players who can afford legal teams. This is why the EU’s AI Act keeps coming up in these conversations. It is not perfect, but it treats AI governance like infrastructure, with consistent rules around risk, disclosure, and accountability across countries.
I wrote a deeper breakdown of how the EU is approaching this and what it could mean for creators and platforms. Read more here.
Transparency around AI is about to become a trust signal. Creators who decide their boundaries now will be ahead of the platforms that wait to be forced.
Keeping my eye on…
Discord's Mandatory Age Verification

Starting next month, Discord will require all 200+ million users globally to verify their age through face scans or government ID uploads, or accept severe restrictions. Users who don't verify will be locked into "teen" settings, losing access to age-restricted servers, DMs from strangers, and adult content. The backlash is fierce, especially since hackers breached a Discord vendor in 2025, exposing roughly 70,000 government IDs. Despite Discord's promises that face scans stay on-device and IDs are deleted immediately, users are furious that verification is becoming mandatory so soon after a data breach. This could become a test case for whether platforms can force invasive age checks without losing their user base.
What happens if you don't verify
Other headlines to check out:
AI
Creator Economy
Web3
Friendly Reminder
"Nature does not hurry, yet everything is accomplished." — Lao Tzu
Remember, I'm Bullish on you! With gratitude,