For the past two years, we’ve been glued to the AI race. Jobs, chips, valuations, Sora, who’s winning, who’s losing, and whether our careers are next. And yes, all of that matters.
But while we were watching that scoreboard, something else was happening in the background. Something most consumers weren’t tracking. And this week, the consumer AI story collided with the national security one.
On Friday, the Trump administration ordered federal agencies to phase out Anthropic’s AI and the Pentagon labeled the company a “supply chain risk.” On Saturday, the U.S. and Israel launched a joint military strike on Iran, killing its Supreme Leader.
And somehow both of those things are part of the same story.
I’ve talked about this before. The US-China AI race isn't really about who builds the better chatbot. It's about power. Military power. Whoever fields the most sophisticated AI wins the next war, the same logic that made nuclear capability the ultimate geopolitical currency for the last 80 years. AI is the new nuclear.
That's not a hot take. That's been the actual stakes all along. But it got buried under the noise of valuations and demo videos while the real chess match was happening in classified networks, defense contracts, and boardrooms we don't have access to.
So here's the backdrop you need: for the past year, the Pentagon has been quietly building out its AI infrastructure. Multiple companies have been in the mix. Palantir has been embedded in defense work for years. xAI signed on with no restrictions. Google's in negotiations. And the two big names most consumers actually use, Anthropic and OpenAI, both had or were seeking classified Pentagon contracts.
What Anthropic Actually Did and Why It Matters
In July 2025, Anthropic signed a $200M contract with the DoD, the first AI company cleared to run inside classified military networks, beating out OpenAI and Google. This was a big deal. Huge, actually.
But Anthropic had two conditions: Claude could not be used for mass surveillance of Americans, and it could not power autonomous weapons, meaning systems that kill without a human making the call.
According to multiple reports, Defense Secretary Hegseth told CEO Dario Amodei: give us unrestricted access or lose everything. He threatened to invoke the Defense Production Act, a Cold War emergency law, to force their hand. Amodei didn't move. He said autonomous weapons are "outside the bounds of what today's technology can safely and reliably do." Then on Friday, the Pentagon designated Anthropic a "supply chain risk," a label normally reserved for companies connected to foreign adversaries like China or Russia. Trump also posted on Truth Social calling the company "leftwing nut jobs" and ordered every federal agency off their platform within six months.
There was actually a sign this was coming.
An Anthropic employee posted a pretty devastating message about the direction things were heading and how the future felt bleak from the inside. It read like a precursor. He clearly knew what was building.
Hours after Anthropic was blacklisted, OpenAI announced its own Pentagon deal. Founder and CEO Sam Altman says it includes the same protections: no mass surveillance, no autonomous weapons. But there's a key difference. Anthropic wanted those limits written into hard contract language. OpenAI agreed to a “for all lawful purposes” clause, according to its published agreement, and trusted current U.S. law to define the boundaries.
The problem with that? Laws change.
If this administration redefines what "lawful" means, and it has already pushed the edges with ICE's AI surveillance operations, the scope of that clause could shift with it. Altman himself acknowledged that a future legal dispute could expose OpenAI to the same fate as Anthropic. He signed anyway.
Palantir, for context, has stayed completely quiet through all of this. They've had Pentagon contracts for years and operate deep in defense infrastructure.
Then Came Iran
On Saturday morning, a regular workday in Tehran, the U.S. and Israel launched a joint strike on Iran. Khamenei was killed. Iran retaliated across the Gulf, hitting Bahrain, UAE, Qatar, Kuwait and Israel. Dubai International even shut down indefinitely.
Trump campaigned on no more foreign wars. Now many fear this could spiral into something far bigger. And this is the geopolitical environment AI companies are now embedding themselves into.
A few weeks ago in this newsletter, I wrote about the growing number of people breaking up with OpenAI and switching to Claude. People were frustrated with the direction of the product and, more broadly, were losing trust in the company.
This week adds a new layer to that conversation.
The AI we use isn’t neutral. The companies behind it make choices about government access, military contracts, and what lines they will or won’t cross when the pressure gets real.
Anthropic just showed you where their line is.
OpenAI just showed you theirs.
Palantir has never pretended to draw one.
The AI race was never just about who builds the best product. It’s about who shapes how this technology is used and who decides its boundaries when the stakes stop being theoretical.
PS: On a personal note, I’ll be paying closer attention to the companies I use and support. I want to back the ones deploying this technology in ways that advance humanity, not fracture it.
I’m also navigating my own feelings about jumping ship from ChatGPT, which I’ve unfortunately been in a long relationship with. I’ll let you know how that goes soon…
Other headlines to check out:
AI
Creator Economy
Web3
Friendly Reminder
The best way to get out of your own head is to volunteer and be of service. It’s the quickest path out of your internal loop.
Remember, I'm Bullish on you! With gratitude,



