You started noticing the changes gradually. Your daughter spent more time alone in her room, the blue light from her phone visible under the door late into the night. She stopped eating breakfast, started picking at dinner. When you asked about school, about friends, she said everything was fine, but her eyes told a different story. The pediatrician diagnosed depression and anxiety. Maybe an eating disorder. She was fourteen years old.
You wondered what you had missed. Was it something you said, something you failed to provide? You thought about your parenting, your family history, whether this was somehow genetic. The therapist asked about social media use. Six, seven, sometimes eight hours a day across Instagram, TikTok, and Snapchat. You thought that was just what teenagers did now. That it was normal. That every kid was online that much.
What you did not know, what your doctor did not know, what none of us were told is that the companies running these platforms had conducted extensive internal research showing their products were causing psychological harm to minors. They knew the mechanisms. They measured the damage. And they made deliberate design choices to maximize engagement anyway, even when their own researchers warned that those choices were destroying the mental health of millions of children.
What Happened
The injuries are not subtle. Young people describe a constant, gnawing anxiety that follows them through every waking moment. They check their phones compulsively, sometimes hundreds of times per day, driven by a fear of missing out that feels physically painful. They lose sleep, sometimes sleeping only four or five hours a night because they cannot stop scrolling. The exhaustion compounds everything else.
The depression often starts with comparison. Every photo they see has been filtered, edited, perfected. Every life looks better than theirs. They begin to feel worthless, inadequate, ugly. For many, this progresses into self-harm. Cutting, burning, hitting themselves as a way to feel something other than the numbness or to punish themselves for not measuring up. Others develop eating disorders, starving themselves or purging in pursuit of bodies that match what they see in their feeds.
Parents watch their children disappear into devices. The kids who used to talk, who used to laugh, who used to be present become hollow versions of themselves. They are sitting right next to you but they are somewhere else entirely, trapped in an endless cycle of scrolling, posting, checking for likes, reading comments, comparing themselves, feeling worse, and scrolling more to escape the feeling.
Some young people have suicidal thoughts. Some attempt suicide. Some die. The psychiatric emergency rooms have seen the surge. Therapists have waiting lists months long. This is not a small problem affecting a vulnerable few. This is a generation-wide mental health crisis, and it accelerated precisely when smartphone-based social media became ubiquitous in the lives of children.
The Connection
These platforms were engineered to be addictive. That is not metaphor or exaggeration. It is the technical term used by the people who built them. The companies employed behavioral psychologists, neuroscientists, and addiction specialists to design features that would maximize what they called engagement but what is more accurately described as compulsive use.
The core mechanism is variable reward scheduling, the same psychological principle that makes slot machines addictive. When you pull down to refresh your feed, you do not know what you will see. Sometimes it is interesting, sometimes it is boring, sometimes it is thrilling. That unpredictability triggers dopamine release in the brain. Your brain learns to crave the next pull, the next refresh, the next hit of potential reward.
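To make the mechanism concrete, here is a minimal sketch, in Python, of how a variable-ratio reward schedule behaves. It is an illustration only; the probabilities and numbers are invented and are not drawn from any platform's actual code.

```python
import random

def refresh_feed(reward_probability=0.3):
    """One pull-to-refresh under a variable-ratio schedule.

    Whether this particular refresh delivers something rewarding is random.
    That unpredictability, not the reward itself, is what drives the urge
    to pull again. The probability here is illustrative only.
    """
    return random.random() < reward_probability

def simulate_session(refreshes=50, reward_probability=0.3):
    """Return the number of pulls it took to reach each reward."""
    gaps, pulls_since_reward = [], 0
    for _ in range(refreshes):
        pulls_since_reward += 1
        if refresh_feed(reward_probability):
            gaps.append(pulls_since_reward)
            pulls_since_reward = 0
    return gaps

if __name__ == "__main__":
    # The gaps vary unpredictably (e.g. 1, 4, 2, 7, ...). A fixed schedule
    # would produce a constant gap, which the brain quickly learns to ignore.
    print(simulate_session())
```

Run it a few times and the pattern never repeats. That is the slot-machine property: the reward cannot be predicted, so the only way to find out is to pull again.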
The like button, which Facebook introduced in 2009 and which spread across every platform, creates a quantified measure of social approval. For adolescents, whose brains are still developing and who are neurologically hypersensitive to peer evaluation, this is particularly destructive. A 2016 study published in Psychological Science used fMRI brain scans to show that when teenagers saw photos with many likes, the reward centers in their brains lit up intensely, in the same regions associated with addiction to drugs.
Push notifications were designed to interrupt whatever you were doing and pull you back into the app. Snapchat introduced streaks in 2015, which required users to exchange messages daily or lose their streak count. This created an obligation, a sense that you had to check the app or you would lose something valuable. For young users, maintaining streaks became a source of anxiety and a driver of compulsive checking.
Autoplay video, which TikTok perfected and which Instagram copied with Reels, eliminates stopping points. You never have to make a decision to watch the next video. It just plays. Before you realize it, an hour has passed. Studies on binge-watching behavior, including research published in the Journal of Behavioral Addictions in 2017, have documented how autoplay features override natural stopping cues and promote excessive use.
The algorithms that select what content to show were optimized for one metric: time spent on the platform. Internal documents show engineers were told to maximize daily active users and time on site. The algorithms learned that content triggering strong emotions, especially anger, anxiety, and envy, kept people scrolling longer. So that is what the algorithms promoted. A 2021 study in PNAS found that content expressing moral outrage spreads faster and further on social media than neutral content, and the platforms had designed their systems to amplify exactly this type of material.
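To see what optimizing for a single engagement metric looks like in practice, here is a simplified sketch. The post structure, the prediction fields, and the weights are invented for illustration and do not come from any company's code; the point is what the scoring rule considers and what it never asks.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_dwell_seconds: float   # model's guess at how long the user will linger
    predicted_reaction_prob: float   # model's guess at a strong emotional reaction

def engagement_score(post: Post) -> float:
    """Illustrative scoring rule whose only inputs are engagement predictions.

    Nothing here asks whether a post is accurate, healthy, or age-appropriate.
    A ranker built this way surfaces whatever keeps the user on the screen,
    including outrage- or anxiety-inducing material. The weights are made up.
    """
    return 0.7 * post.predicted_dwell_seconds + 30.0 * post.predicted_reaction_prob

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order candidate posts purely by predicted engagement, highest first."""
    return sorted(candidates, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("calm-update", predicted_dwell_seconds=8, predicted_reaction_prob=0.05),
        Post("outrage-bait", predicted_dwell_seconds=25, predicted_reaction_prob=0.60),
    ])
    print([p.post_id for p in feed])  # the outrage-bait post ranks first
```

In a toy example like this, the angrier post wins every time, not because anyone chose anger, but because anger keeps people scrolling and scrolling is the only thing being measured.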
For young girls specifically, the algorithms learned to promote content about dieting, extreme exercise, and body modification. Internal research at Instagram, conducted in 2019 and reported in 2021 by the Wall Street Journal, found that the platform's algorithm actively pushed users interested in dieting toward more extreme content about eating disorders. Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made it worse. The platform knew this. The algorithm did it anyway.
What They Knew And When They Knew It
Facebook, which rebranded as Meta in 2021, began conducting internal research on teen mental health and platform addiction as early as 2017. These were not small studies. The company surveyed tens of thousands of users across multiple countries. The research teams produced slide decks, memos, and presentations that circulated among executives and product managers.
In 2019, Facebook researchers produced an internal presentation titled Teens and Body Image. The research found that among teens who reported suicidal thoughts, 13 percent of British users and 6 percent of American users traced the issue to Instagram. The research stated directly: We make body image issues worse for one in three teen girls. The researchers documented that teens blamed Instagram for increases in anxiety and depression. This was not a small side effect. The research stated: These issues are particularly serious for teen girls.
Another internal study from 2019, revealed in documents provided to Congress and the Securities and Exchange Commission by whistleblower Frances Haugen in 2021, found that 32 percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. The research found that teens who struggled with mental health issues were aware that Instagram was contributing to their problems but felt unable to stop using it. The word used in the research was addiction.
In March 2020, Facebook researchers produced a study called Social Comparison on Instagram. The research documented that social comparison is worse on Instagram than on other social media platforms and that it was harmful to many users, particularly teen girls. The researchers attributed the difference to Instagram's focus on body and lifestyle content, its algorithm's promotion of aspirational content, and features like the like count that quantified popularity.
The research teams proposed changes. They suggested reducing the visibility of like counts, changing how the algorithm promoted appearance-based content to young users, and creating tools to help users track and limit their time on the platform. Many of these proposals were rejected or implemented only in limited forms. Internal communications show that executives worried these changes would reduce engagement and therefore reduce revenue.
TikTok, owned by Chinese company ByteDance, has released fewer internal documents, but evidence has emerged through litigation discovery and reporting. A 2020 internal report examined by the Wall Street Journal found that TikTok knew its algorithm could push users, particularly young users, into rabbit holes of increasingly extreme content within minutes of opening the app. The system was designed to identify user interests based on watch time and then flood the feed with similar content.
Internal communications at TikTok, revealed in litigation filed in 2022, show employees discussing the addictive nature of the platform openly. Engineers referred to features designed to maximize time on platform and discussed metrics for compulsive use. The company tracked what it called time to first open, measuring how quickly after waking up users would open the app. Product managers celebrated when this metric decreased, meaning users were checking TikTok earlier in their morning routine.
Snapchat, created by Snap Inc., introduced the streaks feature in 2015 despite internal discussions about whether it would create unhealthy pressure on young users. Documents produced in litigation show that product teams discussed how streaks would increase daily active usage, particularly among teenagers. The feature was designed specifically to create fear of loss. If you did not send a snap to your streak partners within 24 hours, you would lose the streak and the number counting how many days you had maintained it.
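To make the fear-of-loss mechanism concrete, here is a toy sketch of a streak counter with a hard 24-hour expiry. The names and logic are illustrative, not Snapchat's actual implementation; what matters is the reset.

```python
from datetime import datetime, timedelta

STREAK_WINDOW = timedelta(hours=24)

class Streak:
    """Toy model of a daily streak with a hard 24-hour expiry.

    The detail that creates pressure is the reset: one missed window
    erases a count the user may have spent months building.
    """
    def __init__(self):
        self.count = 0
        self.last_exchange = None  # time of the most recent snap, if any

    def record_exchange(self, now):
        if self.last_exchange is not None and now - self.last_exchange > STREAK_WINDOW:
            self.count = 0                          # missed the window: streak is gone
        self.count += 1
        self.last_exchange = now
        return self.count

if __name__ == "__main__":
    streak = Streak()
    start = datetime(2024, 1, 1, 8, 0)
    for day in range(100):                          # 100 days of on-time snaps
        streak.record_exchange(start + timedelta(days=day))
    print(streak.count)                             # 100
    late = start + timedelta(days=100, hours=6)     # about 30 hours after the last snap
    print(streak.record_exchange(late))             # back to 1
```

One hundred days of effort, then a single late day, and the number a teenager has been protecting drops to one. That asymmetry is the source of the obligation the feature creates.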
By 2018, Snapchat had data showing that streaks were a significant source of anxiety for teen users. Some teens reported feeling obligated to hand their passwords to friends when they knew they would be unable to access the app, so the friend could keep their streaks alive. Despite this data, the company expanded the feature and made streak counts more prominent in the interface.
All three companies had research teams studying problematic use, which is the clinical term for behavioral addiction. Meta had a team called the Problematic Use Team. Internal emails show this team found that certain users exhibited symptoms of addiction, including failed attempts to reduce use, loss of interest in other activities, and continued use despite awareness of harm. The research quantified this. Approximately 5 to 10 percent of users met criteria for problematic use, but among teens the percentage was higher.
How They Kept It Hidden
The first line of defense was simply not publishing the research. These were internal studies, conducted by employees, stored on company servers. There was no legal requirement to make them public. Meta, TikTok, and Snapchat conducted thousands of hours of research into user behavior and platform effects. Almost none of it was published in peer-reviewed journals where independent scientists could examine the methods and findings.
When outside researchers requested data to study platform effects on mental health, the companies routinely denied access. Academic researchers who wanted to study Instagram's effect on body image or TikTok's algorithmic recommendations were told the data was proprietary. This meant that for years, the only entities with comprehensive data about how these platforms affected users were the platforms themselves, and they were not sharing.
The companies funded external research, but they funded studies designed to show positive effects or null results. Meta gave grants to academic researchers studying social connection and community building. TikTok funded research on creative expression. These studies were real and the researchers were legitimate, but the funding created a selection bias. Research that might show harm was less likely to be funded, less likely to get access to data, and less likely to be promoted by the companies.
When research showing harm did emerge from independent sources, the companies deployed public relations teams to discredit it. After a 2017 study in the Journal of Adolescent Health found associations between social media use and depression, Facebook published blog posts questioning the methodology and highlighting other studies that found no effect. The company did not mention that it had internal research confirming the association.
Industry groups provided another layer of concealment. Meta, TikTok, and Snapchat are members of trade associations like TechNet and the Computer and Communications Industry Association. These groups lobby regulators and legislators, often arguing that concerns about social media and mental health are overblown or that the research is inconclusive. The groups present themselves as neutral industry voices, but they are funded by the platforms and they advocate for the platforms' interests.
The companies also used legal agreements to prevent information from spreading. When they settled lawsuits brought by users or families, the settlements typically included non-disclosure agreements. The plaintiffs were paid in exchange for silence. This meant that evidence of harm that emerged in discovery, including internal documents and depositions, was sealed and could not be used to warn other users or inform public debate.
When employees raised concerns internally, they were often reassigned or marginalized. Frances Haugen, the Facebook whistleblower, described a culture where researchers who found negative results were sidelined. Their work was not presented to executive leadership or was presented with caveats that minimized the findings. Product decisions were driven by engagement metrics, not by safety research.
Perhaps most effectively, the companies framed the issue as one of parental responsibility. Their public messaging emphasized tools that parents could use to monitor and limit their children's use. This shifted blame away from platform design and onto families. If your child was spending six hours a day on Instagram, the implicit message was that you as a parent had failed to set appropriate limits. The companies did not mention that their products were designed by addiction experts to be nearly impossible to resist.
Why Your Doctor Did Not Tell You
Pediatricians and family doctors were seeing the mental health crisis in their exam rooms. Rates of depression, anxiety, self-harm, and suicidal ideation among adolescents began rising sharply around 2010 and accelerated after 2012. Doctors saw the patients, wrote the prescriptions for antidepressants, made the referrals to therapists. But most did not connect it to social media because they did not have access to the research showing causation.
The internal studies conducted by Meta, TikTok, and Snapchat were not published in medical journals. They were not presented at pediatric conferences. They were not included in continuing medical education courses. When doctors read the available medical literature on social media and mental health, they found a mix of correlational studies with conflicting results. Some showed associations between social media use and poor mental health outcomes. Others showed no effect or even positive effects. Without access to the platforms' internal data, it was impossible to see the full picture.
Medical education moves slowly. Doctors learn about new health risks primarily through peer-reviewed publications, professional guidelines, and updates from organizations like the American Academy of Pediatrics. The social media companies ensured that their internal research did not enter these channels. By the time independent researchers had gathered enough data to establish clear patterns, millions of children had already been affected.
Additionally, the companies' public messaging influenced medical opinion. Meta published blog posts citing research that found social media helped teens feel more connected and supported. TikTok promoted stories of users who found mental health support and community on the platform. Snapchat emphasized ephemeral messaging as less pressure-filled than permanent posts. These narratives reached doctors, often through the same channels where they consumed other news and information.
Many physicians, particularly those who did not have teenagers themselves or who were not heavy social media users, underestimated how fundamentally different these platforms were from earlier forms of media. They thought about social media as something like television or telephone calls, just delivered through a different device. They did not understand the algorithmic curation, the variable reward schedules, the quantified social feedback, or the designed addictiveness that made these platforms categorically different from previous communication technologies.
When doctors did ask about social media use, they often received reassuring answers. Teens would say they used it to stay in touch with friends, which was true but incomplete. They did not necessarily recognize or articulate that they were also experiencing anxiety from constant comparison, obligation from maintaining streaks, or compulsive checking behavior. They thought this was normal because everyone they knew was experiencing the same thing.
Professional medical organizations have begun updating their guidance. In 2023, the American Psychological Association issued recommendations for adolescent social media use, advising that use should be limited and monitored, particularly in early adolescence. But this was more than a decade after the problems began, and it came only after independent researchers, whistleblowers, and journalists had forced the internal research into public view. Your doctor was not hiding information from you. Your doctor did not have the information because the companies that did have it kept it locked away.
Who Is Affected
If your child used Instagram, TikTok, or Snapchat regularly during adolescence, particularly during the vulnerable years between ages 11 and 16, they were exposed. Regular use typically means daily access, scrolling through feeds, posting content, or checking notifications. The threshold is not extreme. Research shows effects beginning at about two to three hours per day, but for many young people, use was far higher.
The injuries are most common and most severe in girls and young women, though boys and young men are affected as well. Internal research from Meta showed that the body image and social comparison effects were strongest in girls, which aligns with broader psychological research on adolescent development. But boys experienced different forms of harm, including social anxiety, fear of missing out, and sleep disruption from late-night use.
If your child developed depression, anxiety, an eating disorder, or engaged in self-harm during the years they were actively using these platforms, there is a reasonable possibility the platform use contributed to or caused the mental health condition. This is especially true if mental health symptoms began or worsened in connection with increased social media use, if your child talked about feeling bad after using social media, or if they seemed unable to reduce their use despite wanting to.
Many families describe a pattern. The child got a smartphone, usually around age 11 or 12. They opened accounts on Instagram, Snapchat, and later TikTok. Use increased gradually. Within a year or two, the child seemed different. More withdrawn, more anxious, more focused on appearance, less interested in activities they used to enjoy. Mental health symptoms emerged, sometimes gradually, sometimes suddenly. By the time parents realized something was seriously wrong, the child had been using the platforms heavily for months or years.
Young adults who used these platforms as teenagers and who now struggle with mental health issues may also be affected. The injuries do not always appear immediately. Some people develop depression or anxiety during their teen years. Others seem fine as adolescents but struggle in their late teens or twenties, after years of exposure to social comparison, algorithmic manipulation, and compulsive use patterns that disrupted normal development.
The exposure period that matters most is approximately 2012 to the present. This is when smartphone ownership became nearly universal among American teenagers and when the platforms implemented their most aggressively addictive features. Instagram introduced the algorithmic feed in 2016. TikTok launched internationally in 2017 with an algorithm designed for maximum engagement. Snapchat added streaks in 2015 and expanded other features designed to increase daily active use. If your child was an adolescent anytime during these years and used these platforms, they were exposed during the highest-risk period.
Where Things Stand
Hundreds of families have filed lawsuits against Meta, TikTok, and Snapchat. The cases allege that the companies designed addictive products, that they knew these products caused mental health harm in minors, and that they failed to warn users and parents about the risks. The litigation is in relatively early stages, with most cases filed between 2022 and 2024.
In October 2023, dozens of states filed lawsuits against Meta, alleging that Instagram was designed to addict children and that the company misled the public about safety. The complaints cited internal documents showing Meta knew about the harm and chose not to act. These cases are proceeding in federal court, consolidated in multidistrict litigation in the Northern District of California.
School districts have also filed suits, claiming the platforms created a public nuisance by causing a youth mental health crisis that has burdened school resources. Hundreds of districts across the country have joined these cases, arguing they have had to hire additional counselors, implement mental health programs, and respond to increases in student mental health emergencies, all due to social media platform design.
Individual personal injury cases have been filed by families whose children died by suicide or suffered severe mental health crises. These cases seek damages for wrongful death, negligence, and product liability. They argue that the platforms are defective products because they cause harm that outweighs their utility, at least as designed for use by minors.
The legal landscape is complicated by Section 230 of the Communications Decency Act, a federal law that has historically provided broad immunity to online platforms for content posted by users. The companies argue that the lawsuits are attempting to hold them liable for user-generated content, which Section 230 prohibits. Plaintiffs argue that the cases are about product design—the addictive features, the algorithms, the interface choices—not about content, and that Section 230 does not protect design choices.
Courts are beginning to rule on these questions. Some judges have allowed cases to proceed, finding that claims based on platform design features are not barred by Section 230. Other judges have dismissed cases, finding that the claims are insufficiently separated from content issues. The law is evolving in real time as courts grapple with applying a statute written in 1996 to platforms that did not exist when the law was passed.
No major settlements have been announced yet, but the litigation is moving through discovery. This means plaintiffs' attorneys are obtaining internal documents, deposing company employees, and building records of what the companies knew and when. The internal research that has become public so far—the studies about teen mental health, the memos about addictive features—came largely from whistleblowers and journalists. Discovery will likely produce much more.
New cases can still be filed. Statutes of limitations vary by state but generally allow claims to be brought for several years after the injury occurs or after the plaintiff discovers the cause of the injury. For minors, many states toll the statute of limitations until the child turns 18, meaning young adults who were injured as teenagers may have until their early twenties to file claims.
The litigation will likely take years to resolve. These are complex cases against well-funded defendants who have every incentive to fight. But the legal path has been cleared before, with tobacco, opioids, and other products where internal documents showed companies knew about harm and concealed it. The framework exists. What matters now is whether the evidence is strong enough and whether courts are willing to apply existing product liability and negligence law to social media platforms.
What This Means
You have spent time wondering what you did wrong. Whether you should have seen the signs earlier, intervened sooner, been a different kind of parent. You have asked yourself whether there was something about your family, your genes, your home that made your child vulnerable. You have carried guilt that was never yours to carry.
What happened to your child was not an accident. It was not bad luck. It was not a failure of parenting or a genetic predisposition that could not have been avoided. It was the result of specific design decisions made by engineers and executives who had data showing their products caused harm to children and who chose to prioritize engagement and revenue over safety. They knew. The documents prove they knew. And they did it anyway.
Your child was targeted by some of the most sophisticated behavioral psychologists and software engineers in the world, working for companies with billions of dollars in resources, designing systems intended to be irresistible. The fact that your child could not resist, the fact that they spent hours on these platforms even when it made them miserable, is not a moral failure. It is the designed outcome. They were supposed to be unable to stop. That was the point.