Your daughter stopped eating lunch at school. She started spending hours in her room, door closed, face lit by the glow of her phone. When you finally convinced her to talk to someone, the therapist used words like major depressive disorder, anxiety, body dysmorphia. She mentioned something about social media, but you had always thought that was just what teenagers did. You wondered if you had missed something. If you should have been stricter about screen time. If this was somehow your fault.
Your son started wearing long sleeves in summer. The school counselor called about concerning posts he had been making online. The pediatrician asked about self-harm. You tried to remember when he changed, when the happy kid who used to play basketball in the driveway became someone who could not sleep, who talked about not wanting to be here anymore. The doctors said adolescence is hard. They said some kids are just more sensitive. They prescribed medication and suggested therapy, and you paid for both while watching your child disappear into his phone for six, eight, ten hours a day.
What no one told you, what your doctors likely did not know themselves, was that some of the largest technology companies in the world had conducted extensive internal research showing that their platforms were causing psychological harm to young users. They had the studies. They had the data. They knew the specific features that made it worse. And they made a business decision to keep those features anyway.
What Happened
Depression in adolescents looks different than it does in adults. It shows up as irritability, as sudden angry outbursts over small things. It looks like a teenager who cannot get out of bed for school, who stops seeing friends, who loses interest in activities they used to love. It feels like a weight they cannot name, a heaviness that makes everything harder.
Anxiety in young people often manifests as constant worry about how they look, what people think of them, whether they are good enough. It creates a racing mind that will not stop, a fear of being judged that becomes paralyzing. These kids start avoiding social situations. They have panic attacks. They cannot focus on schoolwork because their thoughts will not slow down.
Self-harm among teenagers has increased dramatically. Cutting, burning, hitting themselves. They describe it as a way to feel something when everything else feels numb, or as a way to release emotional pain that has nowhere else to go. Many say they saw it first on social media, that the platforms showed them how to do it, connected them with communities that normalized it.
Eating disorders in minors have reached crisis levels. Anorexia, bulimia, orthorexia, body dysmorphia. Young people, especially girls, who become obsessed with their appearance, who see filtered and edited images thousands of times a day and feel they can never measure up. They stop eating. They exercise compulsively. They develop distorted views of their own bodies that no amount of weight loss can fix. Some die from it.
The Connection
These platforms were engineered specifically to be addictive. The infinite scroll, the pull-to-refresh mechanism, the unpredictable reward schedule of likes and comments, the autoplay video, the algorithmic feed that learns exactly what keeps each user engaged. Every feature was designed based on psychological research about how to capture and hold human attention.
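To make the mechanism concrete, the sketch below models the unpredictable reward schedule in a few lines of Python. It is not code from any platform; the probability, function names, and numbers are invented for illustration. The pattern it shows, a reward that arrives only some of the time inside a feed that never ends, is the reinforcement pattern the designers relied on.

```python
import random

# Illustrative sketch only: a toy variable-ratio reward schedule.
# Nothing here is actual platform code; the probability and names are invented.

def pull_to_refresh(reward_probability=0.3):
    """Each refresh delivers new likes or comments only some of the time.
    Unpredictable rewards build stronger habits than predictable ones."""
    return random.random() < reward_probability

def scroll_session(refreshes=50):
    """An infinite feed has no natural stopping point; this loop ends only
    because we impose a limit, which the real product does not."""
    rewards = 0
    for _ in range(refreshes):
        if pull_to_refresh():
            rewards += 1  # a new like, comment, or unusually engaging post
    return rewards

if __name__ == "__main__":
    print("Intermittent rewards this session:", scroll_session())
```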
For developing brains, this engineering is particularly damaging. Adolescent brains are undergoing massive reorganization, particularly in areas responsible for self-control, emotional regulation, and reward processing. The constant dopamine hits from social media engagement interfere with normal development. The brain starts to crave those hits, to feel distress without them. This is not metaphorical addiction. Brain imaging studies show that social media use engages the same reward pathways activated by gambling and addictive drugs.
A study published in JAMA Psychiatry in 2019 followed over 6,500 adolescents for three years and found that those who checked social media most frequently had significantly higher rates of depression. The relationship was dose-dependent: more use meant more harm. Research published in The Lancet Child & Adolescent Health in 2019 found that teenagers who used social media more than three hours per day were at heightened risk for mental health problems, particularly internalizing problems.
The mechanisms are well documented. Social comparison is constant and unavoidable. Every time a teenager opens these apps, they see curated highlight reels of other people appearing happier, more attractive, more successful. Research shows this creates what psychologists call upward social comparison, which directly damages self-esteem and increases depression and anxiety.
The platforms also create what researchers call fear of missing out. The algorithmic feeds ensure that teenagers see content about social events they were not invited to, experiences they are not having. This generates real psychological distress. Studies using experience sampling methods, where researchers ping participants throughout the day to assess their mood, show that teenagers feel worse after using social media than they did before.
The algorithmic recommendation systems are particularly harmful. They identify vulnerable users and feed them increasingly extreme content. A teenage girl who looks at one diet post will be shown thousands more, along with content about extreme weight loss, pro-anorexia communities, and exercise obsession. A teenager who searches for content about sadness or self-harm will be guided toward increasingly dark content, toward communities that normalize suicidal ideation.
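A simplified sketch can show how this happens even when no single line of code is written with bad intent. The toy recommender below, written in Python with invented topics, weights, and a made-up user model, does nothing but boost whatever the user engages with; a small initial vulnerability is enough to tilt the whole feed toward increasingly extreme content. It illustrates the feedback loop, not any company's actual system.

```python
import random
from collections import Counter

# Illustrative sketch only: a toy engagement-driven recommender.
# Topics, weights, and the user model are invented; real systems are far more
# complex, but the feedback loop is the point.

TOPICS = ["sports", "music", "comedy", "dieting", "extreme_weight_loss"]

def pick_next_post(weights):
    """Choose the next topic in proportion to past engagement."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for topic, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return topic
    return TOPICS[-1]

def simulate_feed(steps=200):
    weights = {t: 1.0 for t in TOPICS}       # start with a balanced feed
    engagement = {t: 1.0 for t in TOPICS}
    engagement["dieting"] = 2.0              # the user lingers longer on dieting posts
    shown = Counter()
    for _ in range(steps):
        topic = pick_next_post(weights)
        shown[topic] += 1
        weights[topic] += engagement[topic]  # boost whatever held attention
        if topic == "dieting":
            # adjacent, more extreme content inherits part of the boost
            weights["extreme_weight_loss"] += 0.5
    return shown

if __name__ == "__main__":
    # On a typical run, dieting and extreme_weight_loss crowd out the rest.
    print(simulate_feed())
```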
The platforms also damage sleep, which cascades into every aspect of mental health. The blue light exposure, the stimulation, the fear of missing something if they put the phone down. Research published in the Journal of Youth and Adolescence in 2017 found that evening social media use was associated with sleep problems, which in turn predicted depression and anxiety symptoms.
What They Knew And When They Knew It
In 2017, Facebook commissioned research by a consulting firm that identified teenagers as a valuable untapped market. The research noted that young users were particularly engaged and represented future lifetime value. This was not incidental. It was strategic.
In Spring 2019, Facebook conducted internal research analyzing how Instagram affected teenage users, particularly girls. The research, revealed in documents released by whistleblower Frances Haugen in 2021, showed that 32 percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. The research found that among teens who reported suicidal thoughts, 13 percent of British users and 6 percent of American users traced the desire to kill themselves to Instagram.
The internal research stated explicitly: We make body image issues worse for one in three teen girls. The research showed that Instagram created social comparison and anxiety, particularly around beauty and body image. The documents noted that teens blamed Instagram for increases in anxiety and depression. This research was not published. It was marked internal only.
In March 2020, Facebook researchers created a presentation titled Social Comparison on Instagram. It found that social comparison is worse on Instagram than on other social platforms because Instagram is centered on bodies and lifestyle, and that teens told researchers they experience comparison on Instagram in ways that damage their self-esteem.
In March 2021, Facebook researchers prepared another internal presentation showing that 14 percent of boys and 20 percent of girls in the US said they felt worse about themselves because of Instagram. The research showed that the harmful content was being actively recommended by Instagram algorithms, not just stumbled upon.
Internal documents show that in 2020, Facebook executives were presented with research showing that Instagram was pushing vulnerable teenagers toward harmful content about eating disorders. The research showed the recommendation algorithms were connecting teens who showed interest in dieting with increasingly extreme content about weight loss and pro-anorexia communities. Facebook did not change the algorithms.
In 2018, Facebook employees raised alarms about how the platform was being used to promote self-harm and suicide content, and how the recommendation systems were connecting vulnerable users with this content. Despite these warnings, the core algorithmic systems were not meaningfully changed.
TikTok internal documents from 2020, revealed in reporting by The Wall Street Journal in 2021, showed that the company understood its recommendation algorithm could create compulsive use. The documents described the goal as user value and product value, measured in time spent on the platform. Internal metrics tracked how quickly new users became addicted, measured as the time it took before a user would open the app without an external prompt.
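To see what a metric like that means in practice, here is a small Python sketch of how a time-to-first-unprompted-open number could be computed from session logs. The log format, field names, and timestamps are all invented for illustration; this is not TikTok's actual instrumentation.

```python
from datetime import datetime

# Illustrative sketch only: computing "time to first unprompted open" from
# hypothetical session logs. Field names and values are invented.

sessions = [
    # (timestamp, how the session started)
    (datetime(2020, 6, 1, 18, 2), "push_notification"),
    (datetime(2020, 6, 1, 21, 45), "shared_link"),
    (datetime(2020, 6, 2, 7, 30), "unprompted"),   # user opened the app on their own
    (datetime(2020, 6, 2, 12, 10), "unprompted"),
]

def hours_to_first_unprompted_open(sessions, signup_time):
    """Return how long after signup the user first opened the app without an
    external prompt, or None if it never happened."""
    for timestamp, trigger in sessions:
        if trigger == "unprompted":
            return (timestamp - signup_time).total_seconds() / 3600
    return None

signup = datetime(2020, 6, 1, 17, 0)
print(hours_to_first_unprompted_open(sessions, signup))  # 14.5 hours
```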
A 2020 TikTok internal report analyzed teen users and found that the recommendation system rapidly identifies vulnerable users and feeds them increasingly extreme content. The research showed that teenagers who watched content about sadness or depression would be shown more and more of that content, pushing them into what researchers called rabbit holes.
TikTok documents from 2021 showed that company researchers were aware that teen users reported mental health harms. When TikTok conducted user research with teenagers, participants described feeling bad about themselves after using the app, feeling addicted, and feeling unable to stop using it even when they wanted to. The company continued to optimize for engagement rather than wellbeing.
Snapchat internal research from 2019 showed that the company understood its features created anxiety in young users. The streak feature, which rewards users for sending messages to the same person every day, was specifically identified in research as creating obligation and anxiety. Teenagers reported feeling stressed about maintaining streaks, feeling bad when streaks were broken, and feeling the feature turned friendship into work. Snapchat expanded the feature.
A 2020 internal Snapchat study found that the Snap Map feature, which shows users where their friends are in real time, created fear of missing out and social anxiety. Teenage users reported that seeing where their friends were without them made them feel excluded and upset. The feature was not removed.
Documents from all three companies show that executives were repeatedly presented with research about mental health harms to minors. The documents show that when product design decisions conflicted with user wellbeing, the companies consistently chose engagement and growth over safety. When engineers proposed design changes that would reduce harm but also reduce usage time, those proposals were rejected.
How They Kept It Hidden
The companies funded external research through grants and partnerships with academic institutions, but these funding relationships came with strings attached. Researchers who wanted continued funding learned not to publish findings that made the platforms look bad. Several researchers have spoken publicly about pressure they experienced to soften findings or to delay publication of concerning results.
The companies also funded large studies on digital wellbeing that were designed in ways that made it difficult to find harms. These studies often used self-reported measures of social media use, which are notoriously unreliable, rather than actual usage data the companies possessed. They used cross-sectional designs rather than longitudinal designs, making it harder to establish causation. When these studies found no effects or small effects, the companies promoted them heavily.
Meanwhile, the companies' internal research, which used actual behavioral data and more sophisticated methods, was finding significant harms. That research was kept internal. When Frances Haugen released thousands of pages of internal Facebook research in 2021, it revealed the gap between what the company was saying publicly and what it knew privately.
The companies cultivated relationships with select researchers and public health officials who were willing to be skeptical of social media harms. These researchers were given access to data, funding, and platforms to share their views. Their presence created the appearance of scientific debate, even as the internal research was increasingly clear.
The platforms also used their content moderation systems strategically. They would announce policies against pro-eating disorder content or self-harm content, generating positive press coverage. But the internal research showed these policies were not effectively enforced, and more importantly, the algorithmic recommendation systems were still actively connecting vulnerable users with harmful content. The policies addressed a small part of the problem while the algorithms continued to create the larger harm.
When researchers sought access to platform data to conduct independent research on mental health effects, the companies denied access or provided data under restrictive agreements that gave the company veto power over publication. This made it extremely difficult for the independent scientific community to assess harms.
The companies also used legal settlements with non-disclosure agreements to keep cases quiet. When families sued over teen suicides or eating disorders, the cases were settled with NDAs that prevented the families from speaking publicly about what they learned in discovery.
Meta employed a public relations strategy that framed mental health concerns as a societal problem rather than a platform design problem. Company executives gave speeches about the importance of digital wellness and announced small changes to settings while the core algorithmic systems that were driving harm remained unchanged.
Why Your Doctor Did Not Tell You
Medical schools do not teach about technology addiction or platform design. Most physicians in practice today completed their training before social media became ubiquitous. They learned about substance addiction and gambling addiction, but not about the specific psychological mechanisms of social media addiction.
The clinical research that physicians rely on, published in medical journals, lagged years behind the internal corporate research. While Facebook knew in 2019 that Instagram was harming teen girls, medical journals were still publishing studies debating whether social media effects were real or meaningful. Physicians reading that literature saw scientific disagreement, not the clear internal evidence the companies possessed.
Pediatricians and adolescent medicine specialists were seeing the mental health crisis in their practices but did not have clear guidance on how to address it. The American Academy of Pediatrics issued recommendations about screen time limits, but these were generic and not specific to the particular harms of algorithmic social media platforms. Doctors were recommending less screen time the way they might recommend more vegetables, without understanding the addictive engineering that made it extremely difficult for teenagers to comply.
There was also no clear place to put social media harm in the diagnostic framework. Depression is depression, whether caused by brain chemistry, trauma, or algorithmic manipulation. The treatment is the same: therapy and possibly medication. Most doctors did not have time in a 15-minute appointment to take a detailed history of which platforms, which features, how many hours per day, what content was being consumed. They treated the symptoms they saw.
The companies also created educational materials for doctors that downplayed risks. These materials emphasized the benefits of connection and community while minimizing or ignoring the mental health harms. Doctors who wanted to learn about social media effects might well have encountered company-funded materials first.
Additionally, there was a cultural narrative that dismissed concerns about social media as moral panic or technophobia. Doctors who might have been concerned were told they were being alarmist, that every generation worries about new technology, that the evidence was not clear. This narrative was actively promoted by the companies and by researchers who received company funding.
Who Is Affected
If your child or teen used Instagram, TikTok, or Snapchat regularly during adolescence and developed depression, anxiety, an eating disorder, or engaged in self-harm, they may have been harmed by these platforms. Regular use generally means more than an hour of use per day, but harm has been documented at lower levels of use as well.
The most vulnerable period appears to be early adolescence, roughly ages 11 to 15, when the brain is undergoing critical development and when teenagers are most susceptible to social comparison and peer influence. However, harm has been documented in younger children who used these platforms and in older teenagers as well.
Girls and young women appear to be disproportionately affected, particularly by Instagram. The internal research showed that body image issues, eating disorders, and depression related to appearance comparison were significantly worse for girls. However, boys are also affected, particularly by exposure to content promoting unrealistic body standards, by social comparison related to status and achievement, and by algorithmic radicalization toward extreme content.
Teenagers who already had vulnerabilities such as low self-esteem, previous anxiety or depression, trauma history, or social difficulties appear to have been especially harmed. The platforms' algorithms identified these vulnerable users and fed them content that made their struggles worse.
If your child was hospitalized for mental health issues, attempted suicide, required residential treatment for an eating disorder, has ongoing self-harm behaviors, or has been diagnosed with major depression or an anxiety disorder, and they were regular users of these platforms during adolescence, there is likely a connection. The relationship is stronger if their mental health symptoms began or worsened after they started using the platforms, if they spent increasing amounts of time on the platforms, if they showed signs of compulsive use or an inability to cut back, or if the content they engaged with was related to their symptoms, such as diet content for eating disorders or depressive content for depression.
Where Things Stand
As of 2024, hundreds of lawsuits have been filed against Meta, TikTok, and Snapchat on behalf of teenagers and young adults who suffered mental health harm. The cases are consolidated in multidistrict litigation in federal court, which allows for coordinated discovery and efficient handling of common issues.
In October 2023, dozens of states filed lawsuits against Meta alleging that the company knowingly designed Instagram to be addictive to children and that the company misled the public about the safety of its platforms. These lawsuits reference the internal research revealed by Frances Haugen and additional documents obtained through investigation.
School districts across the country have also filed lawsuits, arguing that social media platforms have created a youth mental health crisis that has overwhelmed school counseling services and disrupted education. These lawsuits seek to hold the platforms accountable for the costs of addressing the mental health crisis they allegedly created.
The legal theory in these cases is product liability: that the platforms are defective products because they were designed in ways that cause harm, particularly to young users. The cases argue that the companies knew about the harms through their internal research and failed to warn users or redesign the products to be safer.
Discovery in these cases is ongoing. The companies are being required to produce internal documents, research, emails between executives, and data about how their algorithms work. Some of this material is being filed under seal, but some is becoming public, revealing more detail about what the companies knew and when.
There have not yet been large jury verdicts in these cases, but that is typical for mass tort litigation at this stage. Bellwether trials, where a small number of representative cases go to trial first to help the parties assess the value of the claims, are expected in 2025 and 2026.
Some legal experts predict these cases could result in settlements or verdicts in the billions of dollars, similar to what occurred with opioid litigation. The number of affected young people is enormous, the internal documents are damaging, and juries tend to be protective of children.
The statute of limitations for these claims varies by state but is generally two to three years from when the injury occurred or when the plaintiff discovered or should have discovered the connection between the platform use and the injury. For minors, the statute of limitations typically does not begin to run until they turn 18, meaning that young adults who were harmed as teenagers may still be within the limitations period.
New cases are still being filed. Law firms are conducting investigations and filing claims on behalf of individuals and families. The litigation is expected to continue for several years.
Conclusion
What happened to your child was not random. It was not bad luck or bad genes or bad parenting. It was the result of specific design decisions made by some of the wealthiest and most sophisticated technology companies in the world, decisions made with full knowledge of the harm they would cause to young people.
The engineers who designed the infinite scroll knew it would be hard to stop scrolling. The data scientists who built the recommendation algorithms knew they would identify vulnerable teenagers and feed them harmful content. The executives who reviewed the internal research showing mental health harms knew they were prioritizing engagement and profit over the wellbeing of children. These were not accidents. These were choices. And the documents prove it.