AI Moved Into Real Life This Week and It Got Messy Fast
Hey Alpha Fam,
I’m just coming off an incredible trip to Dubai, where I emceed at the Billion Followers Summit. I’ll be sharing deeper takeaways and insights from the conference in next week’s newsletter.

What surprised me most? AI wasn’t actually the central conversation, and honestly, that was a refreshing change.
Contrast that with CES, where AI was very much center stage. Quantum computing continues inching forward, but let’s be real: it’s harder to explain and far less “sexy” for mainstream audiences.
At CES, AI was everywhere: in homes, cars, robots, wearables, chips, and assistants promising to be more personal, more helpful, and more present. At the same time, the recent Grok situation has become a serious warning sign of what happens when powerful AI tools are released without sufficient safeguards.
Grok 101: Grok is an AI chatbot built by xAI and integrated into X. It can generate text and images directly inside public posts. Recently, its image features were misused to create and spread non-consensual and sexualized images of real people, prompting backlash, platform restrictions, and regulatory scrutiny. The situation has raised serious concerns about AI safety, consent, and how quickly harm can spread when generative tools are built into viral social platforms.
The Grok situation is serious, and it is not just a PR problem
Grok’s image generation features were widely used to create and circulate sexualized and non-consensual images of real people. The tool made it too easy to manipulate photos, respond publicly, and spread harmful content at scale.
This triggered immediate backlash and real consequences.
xAI moved to restrict Grok’s image generation after widespread criticism. Governments even stepped in. Malaysia temporarily restricted access to Grok following concerns over misuse. India reportedly pressed X on moderation failures related to Grok-generated content, leading to takedowns and enforcement actions.
Let’s be clear: this isn’t about regulating innovation; it’s about a system being released without protections strong enough to prevent predictable harm.
The key issue regulators and critics keep pointing to is this: while there will always be learnings when a product goes live, relying on user reporting after harm occurs is no longer acceptable, especially when the product can generate realistic images of real people and distribute them instantly through a viral platform.
Once something trends, the damage is already done.
The Grok situation highlights a larger problem across the industry. AI tools are being shipped quickly, often framed as playful or experimental, without being designed for how people actually behave online.
If an AI system can edit or generate images of real people, consent and safety have to be built in at the point of creation. Not added later. Not handled through moderation clean-up.
This is especially true on social platforms where visibility, replies, and virality are core features.
That means these tools shouldn’t ship without clear guardrails, stress-testing for misuse, and baseline certification standards, just as we expect from products that impact public safety.
CES 2026 showed how deeply AI is moving into daily life
CES 2026 was a clear signal that AI is no longer just software. It is becoming physical, invisible by design, and embedded into everyday systems.
Three major themes were consistent across the show.
AI is moving onto devices through on-device processing and dedicated AI chips. It is becoming embodied through robots and physical systems. And it is being positioned as an always-on layer across homes, cars, health, and work.
NVIDIA leaned heavily into this vision, framing AI as infrastructure that powers everything from data centers to robotics and autonomous systems. Samsung talked about AI companions that live across devices and environments. LG introduced its idea of “Affectionate Intelligence,” where AI senses, thinks, and acts in physical spaces.
Robots were everywhere. Not just as concepts, but as products meant to interact with people directly.
The message from CES was clear: AI is moving closer to us, into our spaces, our routines, and more intimate parts of life.
Trust is now the product
This is where CES and Grok collide.
The more present AI becomes, the higher the cost of failure. A chatbot mistake is annoying. A system that can be used to harass, exploit, or humiliate real people at scale is dangerous.
As AI becomes embedded into physical environments and social platforms, trust is no longer a nice-to-have; it is the product.
Companies cannot treat safety as a policy document or a reporting form. It has to be designed into how the system works from the start.
That includes limits, friction, consent checks, and clear boundaries around real people and real images.
Other headlines to check out:
Web3
Lawmakers are preparing to try again on major crypto bill. Why it matters and what happens next
Dubai bans privacy token use on exchanges, tightens stablecoin rules in crypto reset
Warren Presses SEC Over Crypto Risk as Trump Pushes Crypto Into Retirement Plans
Pitching Crypto and Needling Mamdani: Adams’s Post-Mayoralty Takes Shape
Maduro’s Crypto-Backed Oil Deals Put Tether at Center of Venezuela Money Drama
Coinbase Raises Pressure as Crypto Bill Moves to Senate Markup
Remember, I'm Bullish on you!

With gratitude,
