The past two years have been an inflection point for social media, playing out across platforms, courtrooms, and public opinion. And as someone who works directly in digital mental health and creator well-being, I’ve been watching this moment with both urgency and caution. I care deeply about the young people growing up in these spaces, and I believe action is necessary, which is exactly why I think we need clarity, not just panic, and solutions grounded in evidence.

Back in 2020, The Social Dilemma raised alarms about algorithms, addiction, and how our brains are being shaped on social media. Since then, the conversation has only intensified. Jonathan Haidt’s The Anxious Generation poured fuel on the fire, and Australia just became the first country to pass a law banning kids under 16 from major platforms like Instagram, TikTok, Facebook, X, and YouTube. Platforms there now face fines of up to $49.5 million AUD if they don’t take “reasonable steps” to keep kids out.

In the US, the debate is hitting a fever pitch. Lawmakers are holding hearings, states are filing lawsuits, and headlines scream that platforms are “addicting kids,” “rewiring brains,” and “causing a mental health crisis.”

Depending on who you listen to, this is either the long‑overdue moment of accountability for Big Tech or the start of a sweeping overcorrection that could permanently reshape the social web. I think the real risk is confusing visibility with progress, mistaking outrage for solutions.

So what’s actually happening here?

At the center of this moment is a landmark lawsuit in California targeting companies like Meta, TikTok, YouTube, and Snapchat. The claim is straightforward on its surface: these platforms intentionally design products to be addictive, causing harm to minors. If that framing sticks, it opens the door to sweeping regulation, new liability, and potentially a redefinition of how social media works.

It’s worth noting that TikTok and Snapchat have settled (which, I think, says a lot), while the rest are fighting back.

But the science underneath these claims is far less settled than the rhetoric suggests, and if we care about kids’ mental health, we need to get the science right.

There are studies showing correlations between heavy social media use and anxiety, depression, or lower well-being in teens. Large reviews, including a 2020 systematic review from the UK, have found consistent associations between high-intensity use, such as excessive time spent or constant checking, and higher rates of anxiety and depression. Other research has linked heavy use to body image concerns and disrupted sleep, both of which are known risk factors for poor mental health.

Taken together, the evidence suggests that very high use, often defined as more than three hours a day, is associated with higher risk, particularly for adolescent girls. Sleep disruption, social comparison, and exposure to harmful or idealized content appear to be some of the key pathways driving these effects. The US Surgeon General’s 2023 advisory echoed these concerns, warning that excessive social media use may pose risks for youth mental health, especially when it interferes with sleep or offline relationships.

At the same time, many researchers emphasize that these relationships are complex and often bidirectional. Teens who are already struggling with anxiety or depression may be more likely to spend excessive time online, making it difficult to untangle cause and effect. Correlation isn’t causation, especially in a system as complex as adolescent mental health.

When you look closely at the data, the effects are often smaller and more nuanced than the headlines suggest. A 2024 evidence review from the National Academies of Sciences, Engineering, and Medicine found that most links between social media use and adolescent mental health are modest, highly variable, and shaped by context, including age, gender, and what’s happening offline. The report also concluded that current research does not establish clear causal evidence that social media use directly leads to mental illness.

Just as importantly, the timeline isn’t clear-cut. Signs of declining youth mental health began appearing before many of today’s biggest platforms existed. Smartphones changed daily habits, but so did academic pressure, economic stress, sleep patterns, and the constant pace of modern life. It remains difficult to separate what’s driving what.

That’s where journalists like Taylor Lorenz and outlets like Techdirt have pushed back, arguing that we’re repeating a familiar pattern.

Every generation has had its villain technology. Novels were said to corrupt women, comic books were blamed for juvenile delinquency, and television was accused of rotting brains. The telephone supposedly destroyed real relationships. Each time, a moral panic arrived before the evidence fully caught up.

One of the most contested parts of the current lawsuit is the word “addiction.”

Platforms are absolutely designed to form habits. Engagement is the business model, and features like infinite scroll and algorithmic feeds are built to keep you on the app.

But equating habit-forming design with clinical addiction is where some researchers have pushed back. Addiction, in the clinical sense, involves tolerance, withdrawal, and physiological dependence. A peer-reviewed review published through the National Institutes of Health found that only a small minority of users, around 4 percent, reported anything resembling withdrawal-like symptoms when unable to access social media, such as restlessness or distress. The most commonly reported behavior was simply thinking about social media frequently, which researchers note is not unique to digital platforms.

Framing social media as inherently addictive doesn’t just stretch the evidence, it also risks removing human agency from the equation. The same research shows that a far larger share of frequent users, roughly half in some studies, recognize their habits and report being able to change them when motivated by sleep, school, or well-being concerns.

That doesn’t mean harms don’t exist. Some kids absolutely struggle online. Some adults do too. And as someone who advocates for mental health in digital spaces, I take those struggles seriously. But broad claims that social media is uniquely destroying an entire generation flatten a complex reality and risk leading us toward blunt solutions for nuanced problems.

There’s also a quieter consequence worth naming.

When social media itself is put “on trial,” journalists get swept up in it. Platform reporting becomes polarized and nuance is punished. If you question the panic, you’re accused of defending Big Tech. If you amplify harms, you’re accused of fearmongering. The middle ground, where most evidence actually lives, becomes harder to occupy.

And that should worry us.

Because once governments gain new authority to regulate speech and platform design under the banner of protection, that power rarely stays narrowly scoped. Legal scholars and digital rights groups have warned that addiction-based liability claims could weaken longstanding protections like Section 230, with ripple effects for creators, journalists, and smaller platforms that lack legal defenses.

As someone working on creator advocacy, labor protections, and mental health, I’m deeply concerned about this generation, the first to grow up entirely inside algorithmic systems.

And to be clear: inaction is not an option. We need stronger guardrails for minors, more transparency from platforms, and real support systems for kids and creators who are genuinely struggling.

But meaningful action starts with accurate diagnosis. That means age-appropriate design standards, independent data access for researchers, and mental health resources that meet young people where they actually are, online and offline. It also means addressing power imbalances without collapsing complex human behavior into a single villain.

In my own research on creator mental health, I’ve seen how digital environments can amplify stress, burnout, and emotional vulnerability, especially when people feel trapped inside systems they don’t control. Those findings mirror what many teens experience, too. The harms are real for some, and they deserve targeted, evidence-based interventions, not blanket assumptions.

What we shouldn’t do is let moral panic drive policy that overshoots, misfires, or unintentionally harms the very communities we’re trying to protect.

Kids deserve protection, not performative outrage. And if we care about their mental health, we owe them policies that are as thoughtful, nuanced, and evidence-based as the lives they’re growing up in.

Friendly Reminder

F the algorithm. We need to go by our own rhythm, of our heart. - will.i.am

Remember, I'm Bullish on you! With gratitude,
