Happy Valentine's Day weekend, everyone. While some of you were buying chocolates and booking dinner reservations, a very different kind of breakup has been playing out across the internet, and this one involves AI.

People are leaving ChatGPT. 

Not all of them, and not overnight, but the trend is real and accelerating. And the rebound? It's Claude, Anthropic's AI assistant, that's been quietly winning hearts (and workflows) while OpenAI deals with what might be its messiest era yet.

So what's going on? Let's break it down.

To advertise or not to advertise, that is the question?

ChatGPT's app market share dropped from 69.1% to 45.3% between January 2025 and January 2026, according to data from mobile intelligence firm Apptopia. Web traffic tells a similar story. Similarweb data shows ChatGPT's market share falling from roughly 87% to around 65% in the same period.

Meanwhile, Claude may have a smaller user base overall, but a huge and interesting note is that people who use Claude really use it. 

According to Apptopia, Claude leads all AI chatbots in average time spent per daily active user at 34.7 minutes. Among power users, engagement surged 377% from June to December 2025. That's not casual browsing, that's people building their workflows around it.

Entrepreneur Jason Calacanis publicly announced he cancelled his $10,000/year corporate OpenAI account, saying Claude was better for corporate work. He's far from the only one making the switch.

If you watched the Super Bowl last weekend (congrats, Seahawks fans), you probably noticed AI companies dominated the commercial breaks. Adweek reported that 15 of 66 ads (23%) were either from AI companies or made with AI. 

But the one everyone was talking about? Anthropic's debut campaign for Claude. 

In one, a guy asks a beefy bystander for workout tips, only to get a robotic pitch for shoe insoles. In another, a man seeking advice about communicating with his mother gets served an ad for a dating app targeting older women. The tagline: "Ads are coming to AI. But not to Claude."

It was a direct shot at OpenAI, which had announced plans to start testing ads in ChatGPT's free and lower-cost tiers. And the timing couldn't have been more savage. Gizmodo reported that ChatGPT officially rolled out ads the very next day.

Sam Altman didn't stay quiet. He fired back on X, calling the Anthropic ads "funny" but "clearly dishonest," and accused Anthropic of serving "an expensive product to rich people." He argued OpenAI needs ads to keep AI accessible to the billions who can't pay for subscriptions.

The internet largely sided with Anthropic on this one. Altman, responding to a comedic Super Bowl ad with a corporate-sounding statement, didn't exactly help his case. And the data backs up the impact. BNP Paribas analysis reported by CNBC showed Claude's daily active users jumped 11% post-game, the biggest bump of any AI company that advertised. Claude also hit the top 10 free apps on the Apple App Store.

I believe trust is at the center of it all.

When you talk to an AI chatbot, you're often sharing things you wouldn't share publicly like work problems, health questions, personal dilemmas and financial concerns. Anthropic made this exact point in their blog post announcing the ad-free commitment, noting that a significant portion of Claude conversations involve sensitive or deeply personal topics.

OpenAI says ads won't influence ChatGPT's answers, and your conversations stay private from advertisers. But as Miranda Bogen, director of the AI Governance Lab at the Center for Democracy & Technology, pointed out: business models based on targeted advertising create dangerous incentives for user privacy, regardless of what platforms claim.

It's worth remembering that Sam Altman himself once called ads in ChatGPT "the last resort" and described mixing advertising with AI as "sort of uniquely unsettling." Now that it's happening, it feels like a line has been crossed for a lot of users.

Anthropic is betting on a different model: enterprise contracts and paid subscriptions. And it's working. Claude Code and Cowork have reportedly already brought in over $1 billion in revenue, according to Axios. The company also just closed a $30 billion funding round at a $380 billion valuation.

But Claude Isn't Perfect Either

Before we crown a winner, let's talk about the elephant in the room.

Anthropic launched Cowork, a tool that lets Claude operate directly on your computer, access your files, control your browser, and connect to enterprise apps. It's incredibly powerful but it’s also raised serious security concerns.

Just two days after launch, security researchers at PromptArmor demonstrated that attackers could trick Cowork into silently uploading your confidential files to their own Anthropic account through a prompt injection attack. No malware needed. And the vulnerability had been reported to Anthropic three months earlier and wasn't patched before launch.

Security Boulevard reported that Anthropic acknowledged the risks and told users to avoid granting access to sensitive files, while simultaneously marketing Cowork as a tool to organize your desktop.

As security researcher Simon Willison noted, it's not fair to tell regular non-programmer users to watch out for "suspicious actions that may indicate prompt injection." Most people wouldn't even know what that means.

Anthropic is winning the trust conversation on ads and privacy, but giving an AI agent direct access to your computer introduces a whole different category of risk. 

Both companies are asking users to trust them with deeply personal access. They're just doing it in different ways.

What we're watching is also a fundamental split in how they each think about their relationship with users. OpenAI is building an everything-platform: ads, subscriptions, enterprise, consumer, agents, image generation, video, and voice. Anthropic is positioning Claude as a focused thinking tool that works in your interest, not an advertiser's.

Neither approach is without tradeoffs. 

OpenAI's scale means more people get access to AI for free. Anthropic's model means a better experience, but primarily for people willing to pay. Neowin rightly points out that keeping frontier AI behind paywalls risks limiting access. Though Anthropic did just expand Claude's free tier this week to include file creation, connectors, and skills, a move clearly designed to attract ChatGPT users who don't want ads.

It’s about values too…

The trust question goes deeper than ads though. It goes back to who built these companies, why, and how they govern their AI.

Anthropic was founded in 2021 by Dario and Daniela Amodei, both former OpenAI employees who left over disagreements about safety and commercialization. They built it as a public benefit corporation from day one. Claude is trained using Constitutional AI, where the model checks its own outputs against a transparent set of ethical principles. The company operates under a Responsible Scaling Policy with AI Safety Levels that automatically tighten as models get more powerful, and it's committed not to deploy models unless adequate safeguards are in place. Dario has consistently pushed for government regulation, telling 60 Minutes he's "deeply uncomfortable" with a handful of tech CEOs making these decisions alone.

OpenAI launched as a nonprofit in 2015 to build AI that "safely benefits humanity." Since then, it's pushed toward becoming a for-profit, drawn a lawsuit from co-founder Elon Musk, and walked back its restructuring under pressure. Sam Altman once urged Congress to regulate AI, then called regulation "disastrous." And just this week, Business Standard reported that OpenAI quietly removed the word "safely" from its mission statement entirely.

For a growing number of users, these different corporate structures, safety frameworks and stances on regulation, matter. So, the reality is that the AI honeymoon phase might be over. One in five AI users now uses multiple apps, and the smartest move might be exactly that.

My advice? Date around a bit before you commit and do you own hands on research too. Happy Valentine's month! Choose your AI wisely. 💘


Other headlines to check out:

AI

Creator Economy

Web3 

Friendly Reminder

“Self-love is the source of all our other loves.” – Yung Pueblo

Remember, I'm Bullish on you! With gratitude,

Keep Reading