You started noticing it around middle school. Your daughter stopped coming down for dinner. She said she was not hungry, but you could hear her phone pinging through the bedroom door. When you finally got her to put it down, she seemed hollow. Anxious. Like something had been drained out of her. The pediatrician asked about screen time during the annual checkup, and you mentioned it, but the doctor nodded like this was just normal teenage stuff. Maybe it was just a phase. Maybe you were being too strict. Maybe this was just what growing up looks like now.
Then came the school counselor calling about the cuts on her arms. The calculations written in the margins of her notebook about calories and hours until she could eat again. The complete collapse that seemed to come out of nowhere, except it did not come out of nowhere at all. It came from somewhere very specific. It came from an app on her phone that was engineered, tested, and refined over years to keep her in a state of compulsive use. And the companies that built that app knew exactly what they were doing.
If your child developed depression, anxiety, an eating disorder, or engaged in self-harm after heavy social media use during their teenage years, what happened to them was not an accident. It was not bad parenting. It was not a character flaw or a failure of willpower. It was the documented result of design decisions made in Silicon Valley boardrooms by people who had research showing the harm their products caused and chose to hide it.
What Happened
The injury looks different in every child, but the pattern is the same. It starts with what feels like normal use. Snapchat streaks that cannot be broken. TikTok scrolling that goes from twenty minutes to two hours to four hours without them noticing. Instagram feeds that make them feel like everyone else is prettier, thinner, happier, more successful. The platforms are not just entertaining them. They are changing how their brains work.
Parents describe children who become irritable and anxious when the phone is taken away. Kids who wake up multiple times during the night to check notifications. Teens who stop participating in activities they used to love because nothing feels as stimulating as the feed. The pleasure they used to get from real-world experiences seems dulled. Food does not taste as good. Friends are not as interesting. Everything real feels flat compared to the algorithmically optimized content stream.
Then the mental health symptoms appear. Depression that goes beyond normal teenage moodiness. Clinical anxiety with panic attacks. Obsessive thoughts about body image that turn into restrictive eating or purging. Social comparison that becomes so painful they withdraw completely. Self-harm as a way to feel something other than the constant low-grade despair. Suicidal thoughts that terrify both the child and everyone around them.
Doctors diagnose generalized anxiety disorder, major depressive disorder, anorexia, bulimia, body dysmorphic disorder. They prescribe therapy and medication. They ask about stressors at school and home. But most do not ask the right questions about social media use, and even when they do, they do not connect it to the severity of what they are seeing. Because they were not told that these platforms were designed to be addictive. They were not told that the companies building them had internal research showing they were causing psychological harm to minors.
The Connection
Social media platforms are built on an advertising model that requires one thing above all else: engagement. The more time users spend on the platform, the more ads they see, and the more money the company makes. Every feature is designed to maximize that time. Every notification, every algorithm tweak, every new product release is tested against one metric: does it keep people on the platform longer?
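To make that single-metric logic concrete, here is a toy sketch of the kind of A/B test the paragraph above describes. Every name and number in it is invented for illustration; this is not drawn from any company's actual code or data.

```python
import random
import statistics

def simulate_session_minutes(variant: str, n_users: int = 10_000) -> list[float]:
    """Toy model of daily session length per user, in minutes.
    The baseline and the 'lift' for the new feature are invented."""
    base = 24.0 if variant == "control" else 26.5
    return [max(0.0, random.gauss(base, 8.0)) for _ in range(n_users)]

# An A/B test judged on a single metric: time on platform.
control = simulate_session_minutes("control")
treatment = simulate_session_minutes("new_notification_style")

print(f"control mean:   {statistics.mean(control):.1f} min/day")
print(f"treatment mean: {statistics.mean(treatment):.1f} min/day")

# The decision rule ships whichever variant keeps users longer. Nothing
# in it asks whether the extra minutes are good for the user.
winner = "treatment" if statistics.mean(treatment) > statistics.mean(control) else "control"
print("ship:", winner)
```

The point of the sketch is the decision rule at the end: when the only input is time on platform, user wellbeing never enters the calculation.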
For adults, this creates compulsive use patterns. For adolescents, whose brains are still developing, it creates something more severe. The teenage brain is going through a critical period of development in the prefrontal cortex, the region responsible for impulse control, emotional regulation, and decision-making. At the same time, the reward centers of the brain are in overdrive, making teenagers particularly vulnerable to addictive stimuli.
Social media platforms exploit this vulnerability through variable reward schedules, the same psychological mechanism that makes slot machines addictive; behavioral psychologists call it variable-ratio reinforcement. You do not know when you will get a like, a comment, a message, or a piece of content that makes you laugh or feel something, so you keep checking. The dopamine hit is unpredictable, and that unpredictability is what makes it so powerful. The adolescent brain becomes wired to seek that hit compulsively.
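A few lines of code make the mechanism easier to see. This is a minimal simulation of a variable-ratio schedule, with an invented reward probability; it illustrates the psychology, not anyone's real feed logic.

```python
import random

def check_feed(p_reward: float = 0.3) -> bool:
    """One check of the app: sometimes it pays off with a like or a
    funny video, sometimes it does not. The probability is invented."""
    return random.random() < p_reward

# Count how many checks it takes to earn each reward. On a variable
# schedule the answer is unpredictable, which is exactly what keeps
# the checking behavior going.
gaps = []
checks = 0
for _ in range(1_000):
    checks += 1
    if check_feed():
        gaps.append(checks)
        checks = 0

print(f"rewards received: {len(gaps)}")
print(f"average checks per reward: {sum(gaps) / len(gaps):.1f}")
print(f"longest stretch with no reward: {max(gaps)}")
```

Run it a few times and the dry spells vary wildly. Because the next reward could always be one check away, there is never a natural stopping point.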
A 2017 study published in Psychological Science used functional MRI to show that when teenagers saw photos with more likes on social media, the reward centers of their brains lit up, the same regions activated by addictive substances. Another study, published in 2019 in JAMA Pediatrics, followed 6,595 adolescents over two years and found that those who checked social media more frequently showed significant increases in attention problems, aggression, and delinquent behavior.
The platforms also create comparison mechanisms that are particularly toxic for developing minds. Instagram and TikTok use algorithms that show users idealized images of bodies, lives, and experiences. For teenage girls especially, this creates a constant stream of upward social comparison. Research published in 2020 in the Journal of Abnormal Psychology showed rates of depression and suicide-related outcomes among teenagers increased significantly between 2010 and 2015, with the steepest increases among girls. The timeline corresponds directly with the rise of smartphone-based social media.
The eating disorder connection is especially direct. A 2021 study in the International Journal of Eating Disorders found that Instagram use was associated with a higher risk of orthorexia and general eating disorder symptoms. The platform promotes diet culture, body checking, and content that glorifies thinness through its algorithm. TikTok does the same, with videos tagged with eating disorder terms receiving millions of views before the platform removes them, only to have new ones appear immediately.
What They Knew And When They Knew It
In September 2021, Frances Haugen, a former Facebook product manager, released thousands of internal Meta documents to Congress and the media. These documents, which came to be known as the Facebook Papers, showed that Meta had been conducting extensive research into how its platforms affected teenage mental health. The research showed harm, and the company buried it.
One internal study from 2019 examined the impact of Instagram on teenage body image. The research, conducted by Meta, found that 32 percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. The study noted that this effect was not small or temporary. Among teens who reported suicidal thoughts, 13 percent of British users and 6 percent of American users traced the desire to kill themselves to Instagram.
Another internal document, from March 2020, stated it explicitly: "We make body image issues worse for one in three teen girls." The research showed that Instagram was not just reflecting existing mental health problems. It was causing them. Teens who had no body image issues before using the platform developed them. Teens who had mild concerns developed severe ones.
Meta also conducted research showing why Instagram is particularly harmful: it centers on social comparison around bodies and lifestyles. An internal slide presentation noted that "social comparison is worse on Instagram" than on other platforms. The research characterized TikTok as being about performance and YouTube as being about expertise, while Instagram was about bodies and lifestyle. That focus made it uniquely toxic for adolescent mental health.
The documents showed that Meta employees repeatedly raised concerns internally. In 2019, researchers presented findings to executives showing that teenage users felt addicted to Instagram and that the addiction made them feel worse about themselves and their lives. The researchers recommended changes to reduce compulsive use. Those changes were not implemented because they would have reduced engagement, which would have reduced revenue.
TikTok has been less forthcoming with internal research, but documents from a lawsuit filed by the state of Kentucky in October 2024 revealed similar knowledge. Internal communications showed that TikTok executives knew the platform was carefully tuned to maximize session time and that this tuning drove addictive behavior. Engineers discussed features designed to keep users watching for as long as possible, with particular attention to teenage users, who showed the highest engagement rates.
A 2020 internal document from TikTok noted that users who spent more than a certain threshold of time on the platform each day showed signs of compulsive use but that reducing that time would significantly impact revenue. The document discussed this as a business tradeoff, not a child safety issue. The company chose revenue.
Snapchat has faced similar revelations. Documents released through discovery in litigation showed that Snapchat designed its streaks feature, which encourages users to send messages daily to maintain a count, with full knowledge that it would create compulsive use patterns in teenagers. Internal research from 2018 showed that teenagers experienced significant anxiety about losing streaks and that this anxiety kept them opening the app compulsively. The feature was maintained and expanded because it drove daily active use numbers.
A particularly damning internal email from a Snapchat executive in 2019 acknowledged that the streaks feature was causing stress among teenage users but noted that it was also the main driver of retention among that demographic. The email concluded that the feature should be kept as is. The executive wrote that while some users experienced negative effects, the overall engagement benefit was too valuable to lose.
How They Kept It Hidden
The social media companies employed multiple strategies to conceal the harms they knew about. The first was simply not publishing their internal research. Unlike pharmaceutical companies, which are required to disclose clinical trial data to regulators, social media companies operate with almost no regulatory oversight. They can conduct as much research as they want on the effects of their products and never tell anyone what they found.
When outside researchers tried to study the platforms, the companies made it difficult or impossible. Meta repeatedly denied researchers access to data that would allow independent study of how Instagram affected mental health. When researchers at Harvard and elsewhere requested access to conduct studies on teen mental health outcomes, Meta either ignored the requests or provided such limited data that meaningful research was impossible.
The companies also funded their own research that showed more favorable outcomes. Meta provided grants to outside researchers through its Foundational Integrity Research program. While the company claimed this research was independent, documents showed that Meta retained significant control over what could be published. Studies that showed minimal harm were published widely. Studies that showed significant harm were never released.
TikTok took a different approach by simply refusing to participate in research at all. The company provided almost no data to outside researchers and conducted no publicly disclosed research on mental health effects until after lawsuits were filed. By maintaining total opacity, the company ensured that no inconvenient findings could surface.
All three companies employed large lobbying operations to prevent regulatory oversight. Between 2019 and 2023, Meta spent over $80 million on federal lobbying. Much of that money went toward fighting legislation that would have restricted how platforms could target content to minors or required disclosure of research on mental health effects. TikTok spent over $20 million on lobbying during the same period, focused largely on avoiding restrictions on data collection from teenage users.
The companies also used their public relations operations to spread doubt about the connection between social media and mental health. When outside research showed harm, the companies issued statements questioning the methodology or noting that the research showed correlation, not causation. They funded counter-research that blamed parents, schools, or general societal factors for teen mental health declines. They published blog posts with titles like "The Facts About Instagram and Teen Mental Health" that cherry-picked data to minimize harm.
When pressed by Congress, executives testified that they took teen safety seriously and pointed to features like screen time reminders and content warnings. But internal documents showed that these features were largely cosmetic. They were designed to appear responsive to concerns without actually reducing the engagement that drove revenue. Documents showed that very few users actually changed their behavior based on screen time reminders and that the companies knew this when they implemented the features.
Why Your Doctor Did Not Tell You
Most pediatricians and mental health professionals were not aware of the scope of the problem because the companies hid the research. When Meta conducted studies in 2019 and 2020 showing that Instagram made body image issues worse for one in three teen girls, that research was not published in medical journals. It was not presented at pediatrics conferences. It was not included in any communications to healthcare providers. Doctors had no access to it.
The medical community was working from outside research, which showed concerning correlations but could not prove causation because the companies would not provide access to the data needed to do so. When doctors read studies showing that increased social media use was associated with increased depression in teenagers, they could not know that the companies themselves had conducted more rigorous research proving the causal connection.
Medical education also lagged behind the reality of what was happening. Most pediatricians trained before smartphone-based social media existed. Their education focused on traditional risk factors for adolescent mental health problems: family dysfunction, academic stress, substance use, trauma. Social media was so new that it was not integrated into diagnostic frameworks or treatment protocols in any systematic way.
Even as doctors began to see the pattern in their own practices, with sharp increases in anxiety, depression, eating disorders, and self-harm among teenage patients, they did not have good information about mechanism or causation. They knew social media was part of the picture, but they could not advise parents specifically, because no published research showed exactly which aspects of the platforms were most harmful or which patterns of use created the most risk.
The American Academy of Pediatrics issued general guidance about limiting screen time, but that guidance was not based on an understanding of the addictive design features of the platforms. Doctors told parents to set boundaries, but they did not tell them that the apps were engineered to undermine those boundaries. They did not tell them that every feature, from notifications to infinite scroll to algorithmic content delivery, was optimized to create compulsive use. They did not know that because the companies never told them.
Who Is Affected
The lawsuits focus on minors who used Meta's platforms (Facebook and Instagram), TikTok, or Snapchat during developmentally critical years and subsequently developed mental health conditions. The typical qualifying scenario involves a child or teenager who began using one or more of these platforms between the ages of 10 and 18 and, after a period of heavy use, developed depression, anxiety, an eating disorder, or body dysmorphic disorder, engaged in self-harm, or had suicidal thoughts.
Heavy use generally means using the platform for multiple hours per day over a period of months or years. The pattern often includes compulsive checking, difficulty stopping use even when the child wants to, anxiety when unable to access the platform, and continued use despite negative emotional effects. Many affected children describe feeling like they could not stop, even when they knew the apps were making them feel worse.
For eating disorders specifically, the connection typically involves heavy use of Instagram or TikTok with exposure to content about dieting, body transformation, "what I eat in a day" videos, or idealized body images. Many affected teens describe falling into algorithm-driven content patterns where the platform showed them increasingly extreme content about food restriction, exercise, or body modification because they had engaged with milder versions of that content initially.
For depression and anxiety, the pattern often involves social comparison, cyberbullying, or the stress of maintaining an online presence. Teens describe feeling like they had to be available constantly, had to respond to messages immediately, had to maintain streaks on Snapchat, or had to post regularly to maintain their social status. The compulsive use created sleep disruption, which worsened mood symptoms. The constant social comparison created feelings of inadequacy that developed into clinical depression.
Self-harm cases often involve a combination of factors. The platforms expose vulnerable teens to content that normalizes or even glorifies self-harm. The algorithms, designed to maximize engagement, show users more of what they interact with. A teen who searches for content about depression or self-harm gets shown more of that content, creating a reinforcing cycle. The social comparison and body image issues create psychological pain, and the platform provides both the distress and the suggested coping mechanism.
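That reinforcing cycle can be sketched in miniature. The toy recommender below uses invented topic labels and engagement rates (it is not any platform's actual code); it shows how a feed that rewards engagement with more of the same drifts toward whatever a user lingers on.

```python
import random

# Hypothetical topic categories and invented engagement rates; a sketch
# of engagement-weighted recommendation, not any platform's real system.
weights = {"everyday": 1.0, "edgy": 1.0, "extreme": 1.0}
engagement_rate = {"everyday": 0.2, "edgy": 0.5, "extreme": 0.7}

def recommend() -> str:
    """Pick a topic with probability proportional to its current weight."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

for _ in range(500):
    topic = recommend()
    if random.random() < engagement_rate[topic]:
        weights[topic] *= 1.05  # engagement is rewarded with more of the same

total = sum(weights.values())
print({t: round(w / total, 2) for t, w in weights.items()})
# After a few hundred impressions, the feed skews heavily toward the
# categories the user engaged with: the reinforcing cycle described above.
```

Notice that no one programmed the feed to push extreme content. The drift falls out of a simple rule, show more of what gets engagement, applied to a vulnerable user.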
The legal claims do not require proving that social media was the only cause of the mental health condition. They require showing that the child used the platform heavily during adolescence, that the platform was designed in ways that created compulsive use and psychological harm, and that the child developed a diagnosed mental health condition during or shortly after that period of use. Many affected children had no history of mental health problems before heavy social media use began.
Parents often describe a before and after. Before Instagram or TikTok, their child was happy, engaged, doing well in school. After a year or two of heavy use, the child was withdrawn, anxious, depressed, or engaging in dangerous behaviors. The change was dramatic and seemed to have no other explanation. When the child stopped using the platform, either by choice or because parents intervened, symptoms often improved, though not always completely.
Where Things Stand
As of 2024, over 300 lawsuits have been filed against Meta, TikTok, and Snapchat on behalf of minors who developed mental health conditions after using their platforms. These cases have been consolidated into multidistrict litigation (In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, MDL No. 3047) in federal court in the Northern District of California, which allows for coordinated discovery and pretrial proceedings. The consolidation occurred in October 2022, and the litigation is now in the discovery phase, where plaintiffs are obtaining internal documents from the companies.
Several states have also filed lawsuits. In October 2023, a coalition of attorneys general from 41 states and the District of Columbia sued Meta, alleging that the company knowingly designed Instagram to be addictive to children and misled the public about the safety of the platform. The state lawsuits cite many of the same internal documents that were released by Frances Haugen in 2021, as well as additional documents obtained through investigation.
The first bellwether trials, which are test cases used to help both sides evaluate the strength of their claims before resolving the larger litigation, are expected to begin in late 2025 or early 2026. These trials will involve individual plaintiffs whose cases are considered representative of larger patterns in the litigation. The outcomes will likely influence whether the companies choose to settle the remaining cases or continue to fight them individually.
No settlements have been reached yet in the mental health injury cases. The companies have maintained that they are not liable for how users choose to engage with their platforms and that parents are responsible for monitoring their children. However, the internal documents showing that the companies knew their products caused harm to minors have significantly strengthened the plaintiffs' position.
No universal filing deadline has passed for these cases. Statutes of limitations vary by state, but many states allow minors to file suit within a certain number of years after they turn 18, which means teenagers who were harmed years ago may still be within the window to file a claim. Some states also have discovery rules that start the limitations clock only when the plaintiff learned, or should have learned, that their injury was caused by the defendant. Because the internal documents showing corporate knowledge were only recently made public, those rules could extend the timeframe further.
The litigation is ongoing and actively developing. More internal documents continue to surface through discovery. More lawsuits continue to be filed as parents and young adults learn about the connection between the platforms and their mental health conditions. The legal landscape is moving toward accountability, but it is a slow process that will take years to fully resolve.
What This Means
If your child developed depression, anxiety, an eating disorder, or engaged in self-harm after heavy social media use, what happened was not random. It was not because they were weak or you were a bad parent or they made poor choices. It was because companies designed products to manipulate the reward centers of the developing adolescent brain and then hid their research showing the harm those products caused.
The feeling that you should have known, that you should have intervened sooner, that you should have seen the signs, is part of how this harm was able to continue for so long. The companies counted on parents blaming themselves. They counted on doctors not having enough information to connect the platforms to the injuries. They counted on being able to keep their internal research hidden long enough to build user bases so large that regulation would be difficult.
What happened to your child was a documented business decision. The companies chose engagement over safety. They chose revenue over the wellbeing of minors. They made those choices with full knowledge of what they were doing, and they made them over and over again, year after year, as the evidence of harm accumulated. The documents prove it. The timeline proves it. And now, finally, they are being held accountable.