You noticed it gradually, then all at once. Your child who used to read books at night now scrolled until 2 AM, the glow of the screen casting shadows across their face. The straight-A student stopped caring about homework. The athlete quit the team. Then came the morning you found the marks on their arms, or stood outside the bathroom listening to them purge, or read the journal entry about not wanting to be alive anymore. The pediatrician asked about social media use. You said yes, of course, but does not everyone use it? The therapist said your child had developed severe anxiety and depression. The psychiatrist added self-harm behaviors and body dysmorphia to the chart. You wondered what you had missed, what you had done wrong, why your child could not just put the phone down.
You tried everything. Screen time limits that led to screaming matches. Taking the phone away entirely, which resulted in your teenager sobbing that you were ruining their life, that all their friends were online, that they would have no one without it. You watched them shake and pace like someone in withdrawal. When you gave the phone back, they grabbed it with both hands and disappeared into it, their face cycling through expressions you could not read, their thumbs moving in that endless scroll. The mental health crisis deepened. Emergency room visits. A partial hospitalization program. Medications that helped only at the edges. Everyone kept asking what had triggered this, as if there had been some discrete traumatic event, but there was not one thing. There was just the phone, and the apps, and the slow erasure of the child you knew.
What the doctors did not tell you, because they did not know, was that your child had been caught in something engineered to be inescapable. The anxiety, the depression, the self-harm, the eating disorder, none of it was a failure of willpower or parenting or character. It was the result of deliberate design decisions made by some of the wealthiest technology companies in the world, companies that had their own research showing exactly what their products were doing to young minds, and chose profit over safety at every turn. They knew. And they did not tell you.
What Happened
The injury has many faces, but parents and young people describe it in remarkably similar ways. It starts with what feels like normal use. Checking Instagram to see what friends are doing. Scrolling TikTok before bed. Keeping Snapchat streaks alive because letting them break feels like losing a friendship. But somewhere along the way, the behavior changes from a choice to a compulsion.
Young people describe an inability to stop scrolling even when they want to. They reach for their phones within seconds of waking. They feel physical anxiety when separated from their devices. They check apps hundreds of times per day, often without conscious awareness that they are doing it. The platforms become the primary way they understand themselves and their social standing, through metrics that update constantly: likes, views, comments, followers, streaks.
The mental health effects follow predictable patterns. Anxiety develops around social comparison, never knowing if you are pretty enough, thin enough, popular enough, because the feed provides an endless stream of others who appear to be doing better. Depression sets in as real-world activities lose their appeal and online validation becomes the primary source of dopamine. Sleep deprivation becomes chronic, with young people staying up for hours scrolling, unable to put the phone down even as exhaustion overwhelms them. For girls especially, eating disorders and body dysmorphia spike as they encounter algorithm-driven content showing idealized and often digitally altered bodies, along with pro-anorexia content that the platforms actively recommend to users who show interest.
Self-harm follows its own terrible logic. Young people who feel anxious or depressed search for content about those feelings. The algorithms interpret this as engagement and serve more of the same. A teenager looking up information about suicide gets recommended more suicide content. Someone who watches one video about self-harm gets dozens more. The platforms create echo chambers of despair, and young people describe falling into holes of increasingly dark content, their mental state deteriorating with each video.
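That feedback loop is simple enough to show in a toy model. The sketch below is not any platform's actual recommendation system; it is a hypothetical engagement-maximizing ranker, with invented topic names and weights, that does nothing except boost whatever a user lingers on. That alone is enough to reproduce the narrowing effect described above.

```python
import random
from collections import Counter

# Hypothetical starting weights for one user's feed; all values are invented.
weights = {"sports": 1.0, "music": 1.0, "comedy": 1.0, "distressing": 1.0}

def next_video():
    """Pick the next recommendation in proportion to the current topic weights."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics], k=1)[0]

random.seed(1)
shown = Counter()
for _ in range(200):
    topic = next_video()
    shown[topic] += 1
    # A distressed user lingers on distressing content. The ranker cannot tell
    # the difference between interest and despair; it only sees engagement,
    # and it feeds that signal straight back into the weights.
    weights[topic] *= 2.0 if topic == "distressing" else 1.0

print(shown.most_common())  # after a few hundred videos, one topic dominates the feed
```

In this toy run the "distressing" topic starts with the same weight as everything else and ends up crowding out the rest of the feed, not because anyone chose that outcome for this particular user, but because engagement is the only signal the loop optimizes.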
Parents describe children who have become strangers. Explosive anger when asked to put the phone away. Social withdrawal from family and real-world friends. Declining grades and lost interest in activities they once loved. Many parents say they feel like they are competing with an invisible force for their child, and losing. The phone is always more compelling, always waiting, always offering something that real life cannot match. Young adults who have aged out of the acute crisis describe lost years, educational opportunities missed, relationships damaged, and a sense that their adolescence was stolen by apps that promised connection but delivered isolation.
The Connection
Social media platforms are not neutral tools. They are persuasive technologies designed by experts in behavioral psychology to maximize user engagement, which is corporate language for time spent on the app. The mechanisms are well understood because many of them were designed by people who have since become whistleblowers.
The core mechanism is intermittent variable rewards, the same psychological principle that makes slot machines addictive. When you pull down to refresh your feed, you do not know what you will get. Sometimes it is interesting, sometimes boring, sometimes it is exactly the validation you were hoping for. This unpredictability triggers dopamine release in the same brain pathways involved in gambling addiction. The adolescent brain, which is still developing impulse control and reward regulation through the mid-twenties, is particularly vulnerable to this exploitation.
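To make that mechanism concrete, here is a minimal, purely illustrative simulation of a variable-ratio reward schedule, the pattern described above. The payoff probability and the loop are invented for illustration; nothing here comes from any platform's actual code.

```python
import random

def refresh_feed(reward_probability=0.3):
    """Simulate one pull-to-refresh on a variable-ratio schedule.

    Each refresh pays off (an interesting post, new likes) only some of the
    time, and the user cannot predict which pull will deliver -- the same
    unpredictability that keeps people at slot machines. The 30% payoff rate
    is an arbitrary illustrative value, not a platform figure.
    """
    return random.random() < reward_probability

random.seed(42)
gaps = []                  # refreshes between one reward and the next
pulls_since_reward = 0
for _ in range(10_000):
    pulls_since_reward += 1
    if refresh_feed():
        gaps.append(pulls_since_reward)
        pulls_since_reward = 0

print(f"average refreshes per reward: {sum(gaps) / len(gaps):.1f}")
print(f"longest dry spell: {max(gaps)} refreshes in a row")
```

Behavioral research has long found that this kind of unpredictable payoff schedule produces the most persistent behavior and the hardest habit to break, which is why the pull-to-refresh gesture is built the way it is.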
A 2019 study published in JAMA Pediatrics followed 6,595 adolescents over two years and found that those who checked social media more frequently showed significant increases in depression and anxiety symptoms. The dose-response relationship was clear: more use meant worse outcomes. Another study published in The Lancet Child & Adolescent Health in 2019 found that teenagers who used social media more than three hours per day were at heightened risk for mental health problems, particularly internalizing problems like depression and anxiety.
The platforms also employ infinite scroll, a design feature that eliminates natural stopping points. Before infinite scroll, you reached the end of a page and had to make a conscious decision to click for more. Now the content never ends, and the user must actively decide to stop, fighting against both their own dopamine-driven compulsion and the platform design. Autoplay does the same thing with videos. You do not choose the next video. It simply starts, and then the next one, and the next one, removing agency and decision-making from the experience.
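The design difference is easy to sketch. Assuming a hypothetical feed endpoint, a paged design ends each batch with an explicit decision point, while infinite scroll simply keeps fetching, so the only decision left to the user is when to quit. The function and endpoint names below are invented for illustration.

```python
def fetch_page(cursor):
    """Stand-in for a hypothetical feed API: returns ten posts and the next cursor."""
    return [f"post_{cursor}_{i}" for i in range(10)], cursor + 1

def paged_feed():
    """Older pattern: each batch ends with a natural stopping point."""
    cursor = 0
    while True:
        posts, cursor = fetch_page(cursor)
        yield from posts
        if input("Load more? [y/N] ").strip().lower() != "y":
            return  # the user had to choose to continue, and chose not to

def infinite_feed():
    """Infinite scroll: the next batch loads as the user nears the bottom,
    so the stream never presents a stopping point of its own."""
    cursor = 0
    while True:
        posts, cursor = fetch_page(cursor)
        yield from posts
```

The technical change is small; the behavioral change is that stopping now requires an act of will on every scroll rather than a click to continue.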
Push notifications are interruption machines designed to break your attention and pull you back to the app. Internal research from these companies shows they know exactly how effective notifications are at driving engagement, which is why they send them constantly and make them difficult to fully disable. Even when the phone is face down, users report feeling phantom vibrations and compulsively checking for notifications that did not happen.
For young girls, Instagram and TikTok present a particularly toxic combination. Internal Facebook research, leaked and reported by The Wall Street Journal in 2021, showed that Instagram made body image issues worse for one in three teenage girls. The algorithms preferentially show content featuring idealized bodies, cosmetic surgery, and extreme dieting. When a young person engages with this content, even to criticize it or understand it, the algorithm serves more of it. Girls report spending hours comparing themselves to images that are often digitally altered or show bodies maintained through disordered eating, and the platforms know this is happening because they have studied it extensively.
The social comparison is not incidental. It is the product. These platforms generate revenue through advertising, and they sell advertisers access to young people in vulnerable psychological states. An anxious teenage girl comparing herself to influencers is a highly engaged user who will see more ads, and those ads increasingly target her insecurities. The business model requires creating and maintaining psychological distress.
What They Knew And When They Knew It
The companies did not stumble into this harm unknowingly. They researched it, documented it, and in many cases designed for it.
Facebook, which became Meta in 2021, conducted extensive internal research on teen mental health and Instagram. In 2019, Facebook researchers created a presentation titled "We Make Body Image Issues Worse for One in Three Teen Girls." The research showed that among teens who reported suicidal thoughts, 13 percent of British users and 6 percent of American users traced the desire to kill themselves to Instagram. The presentation noted that social comparison is worse on Instagram than on other platforms because Instagram focuses on body and lifestyle. Facebook researchers wrote: "Teens blame Instagram for increases in the rate of anxiety and depression. This reaction was unprompted and consistent across all groups."
Facebook knew this in 2019. The company did not make these findings public. When asked about teen mental health, Facebook executives publicly claimed their research showed Instagram was broadly positive for teen wellbeing. Internal documents obtained by whistleblower Frances Haugen and reported in The Wall Street Journal in September 2021 revealed the gap between what Facebook knew internally and what it said publicly. The company had been studying teen mental health impacts since at least 2019 and had extensive documentation of harm, particularly to teenage girls.
In March 2020, Facebook researchers reported that 32 percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. The research noted that Instagram improved those feelings for only 2 percent of respondents, meaning the platform made body image worse at a rate 16 times higher than it made things better. The same research found that teens experiencing body image issues traced it to Instagram more than to any other source, including experiences with their own bodies or feedback from other people.
TikTok has been harder for researchers and journalists to penetrate, but internal documents from European regulatory investigations reveal similar knowledge. In 2020, TikTok conducted research showing that compulsive use was common among young users and that the platform employed design features specifically to maximize addictive engagement. The company documented that its recommendation algorithm was highly effective at keeping users watching, and that users, including minors, regularly watched content for hours without intending to do so. Documents from a 2023 Kentucky lawsuit against TikTok revealed internal communications describing minors as a growth opportunity and discussing features designed to increase engagement among young users despite known mental health risks.
Snapchat has faced similar revelations through litigation discovery. A 2023 New Mexico lawsuit revealed that Snapchat executives were aware that the app was used to facilitate sexual exploitation of minors and that design features like disappearing messages and Snap Maps created risks the company had documented internally. Discovery documents showed Snap Inc. had research indicating that the streak feature, which shows how many consecutive days two users have exchanged messages, created psychological pressure particularly harmful to young users who felt compelled to maintain streaks even when they wanted to stop using the app. Internal communications showed executives discussing the streak feature as a key driver of daily active users, with awareness that it created compulsive use patterns.
All three companies employed design ethicists and researchers who raised internal concerns. Facebook whistleblower Frances Haugen testified before Congress in October 2021 that Facebook knew Instagram was harmful to teens and chose not to act. She provided internal documents showing that Facebook dissolved a team working on teen wellbeing and ignored recommendations to make Instagram less toxic for young users because those changes would have reduced engagement and therefore revenue.
In September 2021, The Wall Street Journal published The Facebook Files, a series based on Haugen's leaked documents, and a consortium of other news organizations soon followed with their own reporting from the same material. The reporting showed that Facebook had conducted extensive research into teen mental health between 2019 and 2021, had quantified the harms in detail, and had repeatedly chosen not to implement changes that would reduce harm because those changes would hurt engagement metrics. One internal document stated: "We are not actually doing what we say we do publicly." The gap between what Facebook claimed in congressional testimony and what it knew internally was not a matter of interpretation. It was a documented pattern of misrepresentation.
How They Kept It Hidden
The companies used sophisticated strategies to avoid accountability and maintain public perception that their platforms were safe for young users.
First, they relied on terms of service that require users to be 13 or older, knowing full well that enormous numbers of younger children were using the platforms. Internal Facebook documents showed the company was aware that millions of users under 13 were on Instagram, but the company did not implement effective age verification because doing so would have reduced its user base. The age limit provided legal cover while having almost no practical effect.
Second, they funded and promoted research that showed their platforms in a positive light while burying or ignoring research that showed harm. Facebook funded academic research on social media and wellbeing, but internal documents show the company was selective about which findings it promoted. Studies showing positive or neutral effects were publicized. Studies showing harm were described internally as a "PR fire" to be managed. This created a scientific literature that appeared mixed, when in fact the company's own internal research was far more conclusive about harm.
Third, they used design changes as public relations tools while preserving the core engagement-maximizing features. Instagram announced in 2019 that it would test hiding like counts to reduce social comparison pressure. The company promoted this as evidence it took teen mental health seriously. Internal documents show Instagram had research proving like counts were harmful to teens, but the test was limited in scope and designed to generate positive press rather than to actually solve the problem. The core algorithmic ranking and infinite scroll features that drive compulsive use and social comparison remained unchanged.
Fourth, they deployed lobbying and political pressure to avoid regulation. Meta, TikTok, and Snapchat collectively spend tens of millions of dollars annually on federal lobbying. They have fought state and federal legislation that would restrict data collection on minors, require design changes to reduce addictive features, or create liability for mental health harms. When the Kids Online Safety Act was proposed in Congress, tech industry groups lobbied extensively to weaken its provisions, arguing that the companies could self-regulate effectively, even as internal documents showed they were not doing so.
Fifth, they used content moderation as a shield. The companies pointed to investments in content moderation and mental health resources as evidence they were addressing the problem. But content moderation focuses on individual pieces of harmful content, not on the algorithmic systems that recommend harmful content to vulnerable users. Internal documents show the companies understood that the problem was not primarily about individual bad posts, but about systemic design features that drove compulsive use and algorithmic recommendations that created harmful rabbit holes. Content moderation allowed them to appear responsive while avoiding changes to the business model.
Sixth, they settled lawsuits quietly and under seal. When families sued over suicides or eating disorders linked to social media use, the companies fought aggressively and often sought settlement agreements with non-disclosure provisions. This prevented the public from learning what the companies knew and when, and it prevented patterns of harm from becoming visible. Each family was isolated, unable to know how many others had experienced the same thing.
Why Your Doctor Did Not Tell You
Most pediatricians and mental health professionals were not aware of the extent of the risk because the companies controlled the information pipeline. Medical education and clinical guidelines are based on published research, and the published research on social media appeared mixed or inconclusive for years, in part because the companies had funded and promoted research that obscured the harms their own internal studies had documented clearly.
Pediatricians were aware that excessive screen time could be a problem, but they generally understood social media as a neutral tool that could be used well or poorly, like television or video games. They did not know that these platforms employed teams of psychologists and engineers working specifically to maximize addictive engagement in young users, or that the companies had extensive internal research showing serious mental health harms.
The American Academy of Pediatrics issued guidelines recommending limits on screen time, but these guidelines framed the issue as one of moderation and parental supervision, not as exposure to a product designed to be addictive and known by its manufacturers to cause depression and anxiety in adolescents. Doctors were advising parents to set boundaries, without understanding that the platforms were designed specifically to circumvent those boundaries and that the companies had studied how to do so.
Mental health providers were even further behind the curve. When teenagers presented with depression and anxiety, clinicians looked for traditional causes: trauma, family conflict, academic stress, substance use. Social media use was assessed as a symptom or correlate, not as a primary cause. The idea that an app could cause a major depressive episode or contribute to suicidal ideation was not part of standard diagnostic thinking, because the research showing causation was largely hidden in internal company documents.
By the time Frances Haugen released the Facebook Files in 2021, and journalists began reporting on the internal research showing clear evidence of harm, many young people had already spent years on these platforms during critical developmental periods. The medical community began to take the issue more seriously after 2021, but the damage to a generation of adolescents was already done. Doctors are now playing catch-up, trying to treat epidemic levels of anxiety and depression in young people without a clear treatment protocol for what is essentially a mass exposure to an environmental toxin.
Who Is Affected
If your child, or you yourself as a young adult, used Instagram, TikTok, or Snapchat regularly during adolescence and went on to develop depression, anxiety, an eating disorder, or body dysmorphia, or to engage in self-harm, you may have been affected by the documented harms these platforms cause.
The typical exposure pattern looks like this: starting use between ages 10 and 17, using one or more platforms daily, often for multiple hours per day, and developing mental health symptoms during the period of use. Many young people describe a clear before and after, a time when they were mentally healthy and a decline that corresponded with increased social media use.
Girls and young women are disproportionately affected, particularly by Instagram and TikTok, which drive body image issues and eating disorders. Internal Meta research showed the harms were concentrated in teenage girls, and eating disorder treatment centers report that the majority of their adolescent patients describe social media as a significant contributor to their illness.
Young people who identify as LGBTQ are also heavily affected, though in complex ways. Many describe social media as a crucial connection to community and support, but also report that the platforms exposed them to intense harassment and algorithmic content that worsened mental health. The platforms created dependency by being simultaneously harmful and necessary for social connection.
The most severe outcomes, including suicide and suicide attempts, are associated with heavy use of multiple platforms, exposure to pro-suicide or pro-self-harm content that the algorithms recommended, and cyberbullying that the platforms failed to address effectively. Families of young people who died by suicide have found evidence in their children's devices showing algorithm-driven recommendations of suicide content in the days and weeks before their deaths.
If you are a parent and this sounds like your child, or if you are a young adult and this sounds like your experience, you are not alone. Tens of millions of young people used these platforms during adolescence. The companies' own internal research suggested that millions experienced mental health harm as a result. This was not a small-scale problem or a matter of a few vulnerable individuals. It was a mass exposure event, affecting an entire generation.
Where Things Stand
The legal landscape is evolving rapidly. As of 2024, hundreds of lawsuits have been filed against Meta, TikTok, and Snapchat by families, schools, and state attorneys general, alleging the companies knowingly designed addictive products that harmed minors and failed to warn users of the mental health risks.
In October 2023, dozens of states filed a joint lawsuit against Meta, alleging the company violated consumer protection laws by misrepresenting the safety of Instagram and Facebook to parents and minors. The complaint cited internal Meta documents showing the company knew its platforms were causing psychological harm to children and deliberately hid that information from the public. The lawsuit seeks injunctive relief requiring Meta to change its design practices and civil penalties that could reach into the billions of dollars.
Similar multi-state actions have been filed against TikTok, alleging the company designed its algorithm and features to be maximally addictive to young users while knowing the mental health consequences. State attorneys general have been particularly focused on TikTok's recommendation algorithm and the way it funnels young users into harmful content rabbit holes.
Individual personal injury lawsuits are proceeding in state and federal courts around the country. These cases involve young people who developed severe mental health conditions, including some who died by suicide, with allegations that the platforms caused or substantially contributed to the harm. Many of these cases are in early stages, with discovery ongoing. The companies have fought aggressively to dismiss the cases or limit discovery, but courts have increasingly allowed the cases to proceed, finding that families have stated plausible claims that the platforms were defectively designed and that the companies failed to warn of known risks.
A federal judicial panel has consolidated hundreds of social media cases into a multidistrict litigation in the Northern District of California, similar to the way mass tort cases involving defective drugs or medical devices are handled. This consolidation allows for coordinated discovery and may lead to global settlements, though it also means individual cases may take years to resolve.
Several schools and school districts have filed lawsuits alleging that social media platforms have caused a youth mental health crisis that has overwhelmed school counseling resources and disrupted education. These institutional plaintiffs have significant resources for litigation and have been effective at obtaining internal company documents through discovery.
The first trials are expected in late 2024 or 2025. These bellwether trials will test the legal theories and may result in verdicts that shape settlement negotiations. If juries return substantial verdicts against the companies, it could prompt settlement discussions similar to those in other mass tort litigations. If the companies prevail, it may embolden them to fight cases individually rather than settle globally.
Legislative efforts are also advancing. Several states have passed or are considering laws that would impose design requirements on social media platforms to protect minors, restrict data collection on children, or create liability for harms caused by addictive design features. The tech companies are fighting these laws in court, arguing they violate the First Amendment and are preempted by federal law, but some laws have survived initial legal challenges and may take effect in the coming years.
At the federal level, there is rare bipartisan support for increased regulation of social media platforms, particularly regarding child safety. The Kids Online Safety Act has been introduced in multiple sessions of Congress and would require platforms to provide minors with safeguards and limit features known to cause compulsive use. The tech industry has lobbied against it intensively, but public pressure has been building, particularly as more internal documents have become public through litigation and whistleblower disclosures.
What This Means
If your child developed depression or anxiety or an eating disorder or started harming themselves, and you have spent years wondering what you did wrong or what you missed, the internal documents are clear: this was not your fault. These companies studied adolescent psychology, identified vulnerabilities, and designed their products to exploit those vulnerabilities for profit. They knew young people were being harmed. They had the research in front of them, written by their own employees, quantifying the damage in percentages and millions of affected users. And they chose not to act, because acting would have meant reducing engagement, which would have meant reducing revenue.
The harm was not an accident or an unforeseeable side effect of new technology. It was the result of specific design decisions made by executives and engineers who had access to research showing what those decisions would do to young minds. The infinite scroll, the variable rewards, the social comparison metrics, the algorithmic recommendations of harmful content, these features exist because they maximize time spent on the platform, and the companies knew that maximizing time on the platform meant damaging the mental health of millions of adolescents. They made a business calculation that the profit was worth the harm. They put that calculation in writing, in internal presentations and emails and research reports. And then they told the public, and Congress, and parents, and doctors that their platforms were safe.
What happened to your child, or to you, was not a personal failing. It was a documented business decision, made by people who knew better and chose profit anyway. The path forward may be long, and it may involve litigation and legislation and years of advocacy, but the truth is no longer hidden. They knew. We know they knew. And that knowledge changes everything.