You started noticing it around eighth grade. Your daughter, who used to sing in the kitchen, became quiet. She stopped eating breakfast with the family because she was scrolling. She stopped playing soccer because she said her thighs looked wrong in the uniform. At night you would check on her and the blue light from her phone would illuminate her face at two in the morning, then three. Her pediatrician asked about depression. The therapist asked about anxiety. Everyone asked what had changed at home, at school, in your family. Everyone made you feel like you had missed something, done something wrong, failed to protect her somehow.
Or maybe this is your own story. You are twenty-two now and trying to understand why you spent your teenage years hating your body, cutting your arms, unable to sleep, unable to eat, unable to feel anything except the panic of not having enough likes. Your parents thought you were just moody. Your doctor prescribed antidepressants. Your school counselor said it was normal teenage stress. Everyone treated it like it was coming from inside you, like your brain had just decided to malfunction for no reason. You believed them. You thought you were broken.
What none of you knew was that the platforms your daughter used, the apps you grew up with, were designed by teams of engineers who studied addiction, who measured engagement in seconds, who tested which features would keep children scrolling past the point of exhaustion. The companies behind these platforms had internal research showing they were causing psychological harm to minors. They knew which users were most vulnerable. They knew and they did not stop. They built the machine that way on purpose.
What Happened
The pattern is consistent across thousands of families. A child, usually between the ages of eleven and seventeen, begins using social media platforms. At first it seems harmless. Keeping up with friends. Sharing photos. Watching videos. Then the use increases. An hour a day becomes three, then five, then eight. The child begins checking their phone compulsively, sometimes hundreds of times per day. They wake up at night to check notifications. They become distressed when separated from their device.
Alongside this compulsive use, parents and pediatricians begin seeing mental health changes. Depression that seems to come out of nowhere. Anxiety that makes it hard to attend school or social events. Obsessive thoughts about appearance and body image. Disordered eating, sometimes severe enough to require hospitalization. Self-harm, including cutting. Suicidal thoughts and in some cases suicide attempts. Sleep disruption that affects every aspect of functioning.
These young people describe feeling trapped. They know the apps make them feel terrible. They describe scrolling through content that makes them hate themselves, that makes them feel inadequate, ugly, unpopular, worthless. They describe seeing content that glorifies extreme thinness, that provides instructions for self-harm, that romanticizes suicide. They want to stop but they cannot. The pull to check, to scroll, to compare themselves to others becomes overwhelming. It feels exactly like addiction because it is addiction.
Parents describe children who changed. Outgoing kids who became isolated. Confident kids who became consumed with self-hatred. Happy kids who became suicidal. The platforms were the common variable. Remove the platforms and many of these young people begin to recover. Give them back access and the symptoms return.
The Connection
Social media platforms, particularly Instagram and Facebook (both owned by Meta), TikTok, and Snapchat, were engineered using behavioral psychology principles designed to maximize user engagement. These are not neutral communication tools. They are sophisticated behavior modification systems.
The mechanisms are well documented. Variable reward schedules, the same psychological principle used in slot machines, keep users checking for likes and comments. The infinite scroll removes natural stopping points. Autoplay ensures one video leads immediately to the next. Push notifications interrupt other activities to pull users back to the platform. Snapchat introduced streaks, a feature that punishes users who miss a single day by taking away status markers visible to their peers.
For adolescent brains, which are still developing and are particularly sensitive to social feedback, these features are especially powerful. Research published in 2016 by UCLA researchers showed that when teenagers see large numbers of likes on photos, including their own, the reward centers in their brains light up in patterns similar to those seen with drug and alcohol use. The platforms learned to exploit this response.
A 2017 study published in the Journal of Abnormal Psychology documented a sharp increase in depression and suicide-related outcomes among adolescents between 2010 and 2015, coinciding with the period when smartphone adoption and social media use became near-universal among teens. The increases were not small. Depressive symptoms increased by 52 percent among adolescents. Suicide rates for girls aged 13 to 18 increased by 65 percent.
Instagram created particular harm through its emphasis on appearance and its algorithmic promotion of extreme content. Research from the Facebook Papers, internal documents leaked in 2021, showed the company knew Instagram made body image issues worse for one in three teenage girls. Its own research found that among teens who reported suicidal thoughts, 13 percent of British users and 6 percent of American users traced the desire to kill themselves to Instagram specifically.
TikTok's recommendation algorithm is particularly aggressive at identifying vulnerable users and feeding them harmful content. The platform learns what keeps each user watching and serves more of it. For teenagers struggling with mental health, this means being fed increasingly extreme content about self-harm, eating disorders, and suicide. A 2022 study by the Center for Countering Digital Hate created accounts posing as 13-year-old users interested in weight loss content. Within minutes, TikTok began recommending content promoting eating disorders. Within days, the accounts were being served this content every 39 seconds.
What They Knew And When They Knew It
Meta, the parent company of Facebook and Instagram, had detailed internal knowledge of the harm its platforms caused to minors. Documents released by whistleblower Frances Haugen in 2021 revealed years of internal research that Meta conducted but did not share publicly.
In 2019, Meta researchers produced an internal presentation titled "Teens and Body Image." The presentation was shown to executives including CEO Mark Zuckerberg. The research found that 32 percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. The research specifically stated: "We make body image issues worse for one in three teen girls." The company understood this was not about how teens used the platform but about how the platform was designed. "Comparisons on Instagram can change how young women view and describe themselves," the internal research stated.
In March 2020, Meta researchers reported internally that Instagram was pushing some users with eating disorders toward more extreme content about starvation and purging. The researchers found that 100 percent of the users they studied who followed fitspiration accounts were recommended extreme dieting content, thinspiration content, or other eating disorder content within one week. The recommendation algorithms were making vulnerable users sicker.
Meta knew about Instagram addiction specifically. Internal research from 2018 found that teens blamed Instagram for increases in anxiety and depression. The research noted this was unprompted, meaning teens brought it up themselves without being asked. The same research found that among teens who reported experiencing suicidal thoughts, 13 percent of British users and 6 percent of American users traced these thoughts directly to Instagram. Meta had this information in 2018 and continued operating the platform the same way.
TikTok conducted similar research and came to similar conclusions. Internal documents obtained by reporters showed that TikTok employees understood the compulsive nature of the platform. A 2018 internal document described company policy as treating any time someone is not on TikTok as a missed opportunity. Engineers were instructed to maximize daily active users and time spent. The metric they measured was retention, meaning keeping users on the platform as long as possible, regardless of what content accomplished that goal.
In 2020, TikTok researchers conducted an internal study to determine how much viewing it took to form a habit. They concluded that a user needed to watch approximately 260 videos to become addicted to the platform. They referred to this as the TikTok rabbit hole. Rather than warning users or implementing safeguards at this threshold, the company used the research to optimize the algorithm to get users to that point faster.
Snapchat introduced the streaks feature in 2015 with full knowledge of its addictive potential. Internal emails obtained through litigation showed executives discussing how streaks would increase daily active users by making teens feel obligated to use the platform every single day or lose social status. The feature was particularly effective with middle school users who were most vulnerable to peer pressure. Snapchat measured success by whether users opened the app within minutes of waking up. This was not an accident. This was the goal.
All three companies had research teams dedicated to studying teen users specifically. They knew which features were most addictive to adolescents. They knew which types of content caused the most psychological distress. They knew their platforms were being used to share content promoting suicide and self-harm. They knew vulnerable users were being fed increasingly extreme content by recommendation algorithms. They decided not to change course because engagement meant revenue.
How They Kept It Hidden
The concealment strategy was multilayered. First, the companies kept their internal research private. Meta conducted extensive studies on teen mental health and platform effects but published none of this research in peer-reviewed journals where it could be scrutinized by independent scientists. When researchers asked Meta for data access to study these questions, the company refused.
Second, the companies funded external research designed to produce favorable results. Meta provided grants to academic researchers studying social media and mental health. This funding created financial relationships that influenced which questions were asked and how results were interpreted. A 2022 analysis of social media research funding found that studies funded by Meta were significantly more likely to report neutral or positive effects compared to independently funded research on identical questions.
Third, the companies used trade associations and lobbying groups to dispute unfavorable research. When independent studies documented harm, particularly the correlation between social media adoption and teen mental health decline, these industry groups published responses questioning the methodology or arguing that correlation did not prove causation. They emphasized studies showing positive uses of social media for connection and community, shifting attention away from addictive design features and algorithmic harm.
Fourth, the companies designed their platforms to make the scope of the problem invisible to parents and physicians. Usage happens on personal devices. Time spent is not reported to parents unless they install monitoring software. The platforms provided tools that claimed to limit use, but these tools were easy for teens to bypass, and the companies knew this. They could point to the existence of parental controls while designing the core product in ways that made those controls trivial to defeat.
Fifth, when problems became visible, the companies blamed users. They characterized excessive use as a personal choice. They suggested that negative mental health effects were the result of how people chose to use the platforms, not how the platforms were designed. This shifted responsibility from the corporations that built addictive products to the children using them and the parents trying to manage them.
Sixth, the companies settled legal claims with nondisclosure agreements. When families sued over teen suicides or self-harm linked to social media use, the companies settled quietly with legal agreements preventing the families from discussing what they learned in discovery. This kept evidence of company knowledge out of public view and prevented other families from learning about the patterns.
Why Your Doctor Did Not Tell You
Most pediatricians and family physicians did not understand the scope of this problem until recently because the companies kept the research hidden. Medical training does not typically cover addictive technology design. Physicians learned about depression, anxiety, eating disorders, and self-harm, but they learned to look for family history, trauma, school stress, and brain chemistry. Technology use was not presented as a primary cause of mental illness.
When the correlations between social media adoption and teen mental health decline became visible in population data around 2017, the companies disputed the connection. Industry-funded researchers argued there was no causal proof. Physicians reading this debate in medical journals saw disagreement among researchers and did not know that one side of the debate was funded by the companies being studied.
The clinical picture was also confusing because social media harm looks like many other things. A teenager who is depressed because of Instagram looks clinically identical to a teenager who is depressed for other reasons. The standard diagnostic process involves asking about symptoms, family history, major life stressors, and trauma. Social media use was not on the standard checklist. Even when physicians asked about screen time, they often did not understand the difference between two hours of homework on a laptop and two hours of scrolling Instagram. The addictive design features and algorithmic amplification of harmful content were not visible to clinicians.
Furthermore, the platforms presented themselves as neutral communication tools, not products that could cause illness. Physicians understood that drugs could cause side effects. They understood that environmental toxins could cause disease. They did not have a framework for understanding that a communication platform could be pathogenic, that the design itself could make children sick.
Many physicians are only now, in 2023 and 2024, beginning to routinely ask about social media use as part of mental health assessment in adolescents. This change is happening because of the internal documents that became public in 2021, because of independent research that continued despite industry opposition, and because of the sheer number of families reporting the same pattern: a child who was fine, then started using these platforms intensively, then became mentally ill.
Who Is Affected
The lawsuits currently being filed involve minors who used Instagram or Facebook (both Meta platforms), TikTok, or Snapchat and subsequently developed mental health conditions including depression, anxiety, eating disorders, or body dysmorphia, or who engaged in self-harm or experienced suicidal ideation.
The typical case involves a young person who began using these platforms between ages 11 and 17. Use was frequent, usually daily, often for multiple hours per day. The child had difficulty controlling their use even when they wanted to stop. Parents often describe finding their child on the phone late at night, seeing distress related to online interactions, or noticing the child becoming anxious when unable to access the platforms.
The mental health conditions developed during the period of heavy platform use. In many cases, the child had no prior history of mental illness. Parents describe a change, a decline that coincided with the social media use. Some young people required mental health treatment including therapy or psychiatric medication. Some were hospitalized for eating disorders, self-harm, or suicidal behavior. Some died by suicide.
In other cases, young people had mild anxiety or depression that became severe during the period of intensive social media use. The platforms did not necessarily create the mental illness from nothing, but they made existing vulnerabilities much worse.
The affected young people often describe feeling addicted. They knew the platforms made them feel terrible but they could not stop using them. They describe compulsive checking, distress when separated from their devices, and unsuccessful attempts to quit or reduce use. This is not a lack of willpower. This is the predictable result of using a product designed to be addictive.
Parents and pediatricians often describe trying multiple interventions without understanding the source of the problem. Therapy, medication, school changes, family changes. Some things helped temporarily but the mental health problems persisted as long as the intensive platform use continued. When families finally removed access to social media, many saw significant improvement, sometimes rapid improvement. This pattern, where symptoms resolve when the exposure stops, is medically significant. It points to causation.
The lawsuits also involve young adults who are now in their twenties but who used these platforms intensively during adolescence and experienced lasting harm. Someone who developed anorexia at age 14 while using Instagram and spent years in treatment qualifies even if they are now 23. Someone who engaged in self-harm throughout high school while compulsively using these platforms and still carries the scars qualifies. The harm does not disappear when someone turns 18.
If your child used these platforms for multiple hours per day during their teen years and developed depression, anxiety, an eating disorder, or an obsession with body image, or engaged in cutting or other self-harm, or experienced suicidal thoughts, your family may be affected. If you are a young adult who went through this yourself, you may qualify to participate in the litigation regardless of whether your parents understood what was happening at the time.
Where Things Stand
As of 2024, hundreds of lawsuits have been filed against Meta, TikTok, and Snapchat related to social media addiction and teen mental health harm. These cases have been consolidated into a multidistrict litigation (MDL), meaning they are being coordinated in federal court for pretrial proceedings. The MDL is pending in the Northern District of California.
In addition to the federal MDL, cases have been filed in state courts across the country. School districts have also begun filing suits arguing that the mental health crisis caused by these platforms has created substantial costs and disruption for schools attempting to educate and support affected students.
Attorneys general from over 40 states filed suit against Meta in October 2023, alleging that the company knowingly designed features to addict children and teens to its platforms while misleading the public about the substantial dangers. The state lawsuits include claims under consumer protection laws and state laws prohibiting unfair business practices targeting minors.
The cases are still in relatively early stages. Discovery is ongoing, meaning attorneys are obtaining internal company documents, deposing executives and engineers, and gathering evidence about what the companies knew and when they knew it. Based on the internal documents already public, the documentary record of corporate knowledge is strong.
Bellwether trials, meaning representative cases tried to verdict to help both sides assess the strength of claims, are likely still several years away. Large-scale litigation of this type typically takes five to seven years from initial filing to resolution. However, the trajectory is similar to previous mass torts involving corporate products that caused widespread harm, particularly tobacco litigation and opioid litigation, where internal documents ultimately showed that companies knew their products were addictive and harmful and sold them anyway.
New cases are still being filed and accepted. The litigation is not closed. Families and young adults who experienced these harms can still bring claims. Most cases are being handled on a contingency basis by law firms that specialize in mass tort litigation, meaning there are no upfront costs to participate.
Several legal developments have strengthened the cases. In 2023, a federal judge denied motions to dismiss several of the claims, finding that plaintiffs had adequately alleged that the companies knew their products were dangerous and failed to warn users. This means the cases will proceed to discovery and potentially to trial. The judge specifically noted that the allegations regarding the companies' internal research and their decisions not to disclose known harms stated valid legal claims.
The internal documents obtained from Meta through the Frances Haugen disclosures provide powerful evidence. These are not external allegations. These are the company's own researchers, in the company's own words, telling executives that the products were harming teenage girls and that the company was making the problems worse. That evidence exists in writing, with dates and names attached.
Similar evidence is emerging in discovery from TikTok and Snapchat. While these companies have been more successful at keeping internal documents private, the legal process of discovery will compel them to produce research, emails, and strategic planning documents. If the pattern seen at Meta holds, these documents will likely show similar knowledge and similar decisions to prioritize engagement and profit over user mental health.
The legal claims include product liability, failure to warn, negligence, and in some cases wrongful death. The argument is not that social media as a concept is inherently harmful. The argument is that these specific companies designed their products using known principles of behavioral addiction, targeted minors, knew they were causing psychological harm, and failed to warn users or redesign the products to reduce harm. These are choices companies made. They are potentially liable for the consequences of those choices.
Your daughter did not fail herself. You did not fail her. Her brain did not randomly malfunction. What happened was the result of a business decision made by corporations that studied adolescent psychology, built products designed to exploit vulnerabilities in the developing brain, measured the harm they were causing, and continued operating the same way because it was profitable.
The engineers who built the infinite scroll knew what they were building. The executives who read the research about Instagram and teen suicide knew what they were reading. The product managers who designed streaks knew those features would make children feel obligated to open the app every single day. They knew because they studied it. They measured it. They wrote it down in internal documents. And then they built it that way on purpose. What happened to your child, what happened to you, was not an accident. It was the plan.