Your daughter stopped eating dinner with the family two years ago. She takes her plate upstairs, says she has homework, but you can hear the TikTok sounds through her door until 2am. Her grades dropped. She started wearing long sleeves in summer. When you finally got her into therapy, the psychologist said depression and anxiety, maybe an eating disorder developing, and you blamed yourself for not seeing it sooner. You thought about what you did wrong as a parent, whether you pushed too hard or not hard enough, whether this was somehow genetic or just the terrible luck of adolescence.
Your son is sixteen and has not looked you in the eye during a conversation in eighteen months. He comes down for meals silent, eats with his phone in his hand, scrolls through Instagram Reels between bites. His pediatrician noticed the weight loss and the dark circles and asked about sleep. Your son said he sleeps fine, but you know he is on Snapchat until dawn because you can see the screen glow under his door. The doctor suggested a therapist. The therapist suggested reducing screen time. Your son said he would try, but nothing changed. You wondered if you failed to set boundaries early enough, if you gave in too easily when he was twelve and everyone else had a phone.
You assumed this was a parenting problem, a discipline problem, maybe a mental health crisis with roots in your family history or the stress of modern childhood. What you did not know, what your doctors did not know, what almost no one outside a conference room in Menlo Park or Beijing knew, was that teams of engineers and psychologists had spent fifteen years designing these platforms to be addictive. They measured their success in time spent, in dopamine hits, in the inability of minors to put the phone down. And when their own research showed the harm, they buried it.
What Happened
Social media addiction in minors looks like a progressive loss of control. It starts as normal teenage behavior, checking Instagram between classes or scrolling TikTok before bed. But the usage increases steadily. An hour becomes three becomes six. Sleep schedules collapse because the algorithm serves up perfectly tailored content at 1am, 2am, 3am, each video engineered to trigger the next.
The mental health effects show up within months. Anxiety spikes because social comparison is now quantified in likes and comments and follower counts. Depression sets in because the curated perfection on every feed makes ordinary life feel inadequate. Self-harm increases because the algorithms, trying to maximize engagement, serve vulnerable teenagers content about cutting, suicide methods, extreme dieting. Eating disorders develop because the beauty standard content is relentless and the apps learn which body images keep each specific user scrolling.
Parents describe children who cannot stop checking their phones during dinner, during family events, during conversations. Teachers report students who seem physically present but mentally elsewhere, hands twitching toward pockets every few minutes. Therapists see adolescents who recognize the problem, who genuinely want to stop, but cannot. The platform pulls them back within hours. This is not a failure of willpower. This is the intended result of behavioral design.
The physical symptoms include sleep deprivation, weight loss or gain, repetitive strain injuries in thumbs and wrists. The psychological symptoms include persistent anxiety, major depression, body dysmorphia, social withdrawal, inability to concentrate on non-digital tasks. The behavioral symptoms include lying about usage, hiding phones, rage responses when parents try to restrict access, and continued use despite clear negative consequences. These are the diagnostic criteria for addiction, the same patterns seen with gambling or substances, and they are appearing in children as young as ten.
The Connection
Social media platforms cause addiction through intentional manipulation of dopamine systems in the developing adolescent brain. The mechanism is straightforward and the companies understand it completely.
Every social media platform uses variable reward schedules, the same psychological tool that makes slot machines addictive. A teenager posts a photo and does not know if it will get five likes or five hundred. That uncertainty triggers dopamine release in the nucleus accumbens, the brain region associated with reward and motivation. The reward is variable and unpredictable, which creates compulsive checking behavior. The adolescent brain is particularly vulnerable because the prefrontal cortex, responsible for impulse control and long-term thinking, does not fully develop until age twenty-five.
A 2016 study published in the journal Psychological Science demonstrated that social media likes activate the same brain reward circuits as eating chocolate or winning money. Researchers at UCLA used fMRI imaging to watch adolescent brains light up when viewing photos with many likes. The effect was strongest in teenagers aged thirteen to sixteen.
The infinite scroll feature eliminates natural stopping points. Before infinite scroll, a user reached the end of a feed and had a moment to decide whether to continue. Infinite scroll removes that decision point. The content loads automatically, endlessly. Aza Raskin, the designer who invented infinite scroll in 2006, has since called it one of his greatest regrets, estimating it costs humanity 200,000 lifetimes of attention every day.
The algorithms prioritize content that generates engagement, and internal research shows that negative emotions drive more engagement than positive ones. A 2018 internal Facebook study, later leaked, found that content evoking anger generates six times more engagement than other emotions. For teenagers, this means the algorithm learns to serve content that makes them feel inadequate, anxious, or outraged, because those emotions keep them scrolling.
Push notifications are timed to maximize return visits. The platforms use machine learning to determine exactly when each user is most vulnerable to returning. A notification might arrive during homework time, during dinner, during the brief window when a teenager is trying to sleep. The timing is not random. It is calculated to break concentration and trigger compulsive checking.
The developing adolescent brain is particularly susceptible to these mechanisms. Research published in Nature Communications in 2019 showed that early adolescence is a period of heightened sensitivity to social feedback. Teenagers are neurologically wired to care intensely about peer opinion, and social media quantifies peer opinion in real-time metrics that earlier generations never experienced. The platforms exploit this developmental vulnerability deliberately.
What They Knew And When They Knew It
Facebook, now Meta, knew its platforms harm teenage mental health by 2019 at the latest, and likely earlier. Internal research conducted by Facebook researchers between 2019 and 2021, disclosed in the Facebook Files by whistleblower Frances Haugen in September 2021, showed that Instagram was toxic for teenage girls.
One internal Facebook slide from 2019 stated that thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse. Another internal document noted that among teens who reported suicidal thoughts, thirteen percent of British users and six percent of American users traced the desire to kill themselves to Instagram. A third study found that teenagers blamed Instagram for increases in anxiety and depression, and that the problem was not usage time but the nature of the platform itself.
Facebook researchers presented these findings to executives including Adam Mosseri, head of Instagram, in March 2020. The presentation included data showing that social comparison is worse on Instagram than on other platforms like TikTok or Snapchat because Instagram focuses on bodies and lifestyle. The researchers recommended changes to reduce harmful content in teen feeds. Those changes were not implemented.
Instead, Facebook continued developing Instagram Kids, a version of the platform for children under thirteen, until public pressure forced the project to pause in September 2021. Internal communications showed executives were aware of the mental health risks while simultaneously trying to expand access to younger children.
TikTok knew about compulsive usage and mental health harms through its own internal research. A 2020 internal TikTok document, leaked in 2023, showed that company engineers understood the app could become addictive within thirty-five minutes of use. The document described the ideal user experience as entering a flow state where they lose track of time and scroll compulsively. Another internal study found that the recommendation algorithm could push vulnerable users toward suicide and self-harm content within a matter of hours.
ByteDance, TikTok's parent company, operated a version of TikTok in China called Douyin that included mandatory usage limits for minors. Users under fourteen were limited to forty minutes per day and could not access the app between 10pm and 6am. TikTok implemented no such protections on its international version. The company knew how to protect children and chose not to apply those protections outside China.
Snapchat knew about the risks of its platform design by 2018. Internal communications revealed in litigation showed that Snap Inc. executives were aware that features like Snapstreaks, which reward users for sending snaps to the same person for consecutive days, created compulsive usage patterns. Teenagers reported feeling obligated to maintain streaks even when they wanted to stop using the app. Snap executives discussed the addictive nature of streaks in internal emails and decided to expand the feature rather than modify it.
A 2019 Snapchat internal study found that teenage users reported increased anxiety around maintaining friendships on the platform and fear of missing out on social events documented in stories. The study recommended changes to reduce social comparison pressure. Those recommendations were not implemented. Instead, Snap introduced Spotlight, a TikTok-style infinite scroll video feed, in November 2020, adding another addictive feature.
All three companies employed teams of experts in persuasive technology and behavioral psychology. They studied slot machine design, casino tactics, and behavioral addiction literature. They measured dopamine responses. They ran thousands of A/B tests to determine which features maximized time spent. They knew they were designing for addiction because addiction was the business model. User engagement determined advertising revenue, and addiction maximizes engagement.
How They Kept It Hidden
The social media companies used several overlapping strategies to prevent public understanding of the harms they were causing.
First, they classified internal research as confidential business information. The studies showing mental health harms were marked as attorney-client privileged or designated as trade secrets. Employees who conducted the research were required to sign non-disclosure agreements. When Facebook data scientist Sophie Zhang tried to speak publicly about platform harms in 2020, the company threatened legal action. The internal research only became public through whistleblowers who risked their careers and legal liability to release the documents.
Second, the companies funded external research designed to produce favorable results. Meta gave millions in grants to academic researchers through programs like the Facebook Open Research and Transparency initiative. Grant agreements often gave Facebook access to research data and advance notice of findings. Studies that showed platform benefits were promoted widely. Studies that showed harms were rarely published or publicized. Researchers who wanted continued funding learned what results were acceptable.
Third, the companies deployed public relations campaigns emphasizing parental responsibility and digital literacy. When concerns about teen mental health appeared in media, company spokespeople redirected attention to parental supervision and education. The message was always that the platforms were neutral tools and any problems arose from misuse. This rhetoric shifted responsibility from the companies that designed addictive systems to the parents and children affected by them.
Fourth, the companies used trade associations and lobbying groups to fight regulation. TechNet, the Internet Association, and other industry groups spent hundreds of millions lobbying against restrictions on data collection, algorithm transparency, or design features targeting minors. When the United Kingdom proposed an Online Safety Bill requiring platforms to protect children from harmful content, tech industry lobbying delayed and weakened the legislation for three years.
Fifth, the companies made cosmetic changes while preserving the addictive core design. Instagram introduced a Take a Break reminder feature in 2021 that gently suggested users consider closing the app. The feature was optional, easy to dismiss, and did not interrupt the infinite scroll. Meta announced default private accounts for users under sixteen in 2023 but did not change the fundamental algorithmic amplification of harmful content. These changes allowed executives to claim they were addressing concerns while maintaining the business model unchanged.
Sixth, the companies settled individual lawsuits quietly with non-disclosure agreements. Families whose children died by suicide after social media-related mental health crises sometimes received settlements in exchange for never speaking publicly about what happened. These NDAs prevented public awareness of the pattern of harm.
Why Your Doctor Did Not Tell You
Most pediatricians and family physicians did not warn parents about social media addiction risks because they did not have access to the internal research showing those risks. Medical education and continuing education programs teach doctors to recognize substance addiction and behavioral disorders, but social media addiction still does not appear as a formal diagnosis in the standard diagnostic manuals.
The social media companies did not share their internal research with public health authorities or medical professionals. While tobacco companies were eventually required to fund smoking cessation programs and pharmaceutical companies must report adverse events to the FDA, social media platforms operated with no such requirements. Doctors learned about platform harms the same way parents did, through media coverage and individual patient experiences, years after the companies documented the problems internally.
When doctors did see the mental health effects in their patients, the connection to social media was not always obvious. A teenager presenting with depression and anxiety could have many causes. Sleep deprivation, social withdrawal, and poor academic performance are symptoms of various conditions. Without knowing about the deliberate addictive design, physicians often treated the symptoms as a primary mental health disorder rather than recognizing social media addiction as the underlying cause.
The American Academy of Pediatrics did not issue comprehensive guidelines on social media use until 2016, and those guidelines focused on screen time limits rather than platform-specific harms. Updated guidance in 2023 acknowledged the mental health risks but still emphasized parental mediation rather than platform accountability. Physicians were essentially told to recommend moderation without being given information about why moderation was neurologically difficult for adolescents using platforms designed to prevent it.
Medical journals published some studies on social media and mental health correlations, but the internal company research showing causation and intent remained hidden. A doctor reading the available literature in 2018 or 2019 might conclude that social media was associated with depression in some teens but not understand that the association was the result of deliberate design choices to maximize addictive engagement.
Who Is Affected
The lawsuits currently being filed focus on minors who developed diagnosed mental health conditions while using Meta platforms including Facebook and Instagram, TikTok, or Snapchat. Here is what qualifying typically looks like.
Your child used one or more of these platforms regularly during adolescence, usually starting between ages ten and seventeen. Regular use generally means daily access for at least several months, though the specific timeframe varies. The child does not need to have used the platform for years. Internal company research showed that addictive patterns could develop within weeks of regular use.
Your child developed a documented mental health condition that began during or worsened significantly during the period of social media use. The qualifying conditions include major depressive disorder, anxiety disorders, eating disorders including anorexia and bulimia, body dysmorphic disorder, and self-harm behaviors. The condition needs to be documented by a healthcare provider, which means there should be medical records showing diagnosis and treatment. This could be a therapist, psychologist, psychiatrist, pediatrician, or other treating physician.
The most serious cases involve self-harm or suicide attempts that required medical intervention. Emergency room visits, psychiatric hospitalizations, and residential treatment programs all create the kind of documentation that strengthens a case. But you do not need a hospitalization to qualify. Ongoing outpatient therapy, prescription medications for depression or anxiety, and documented diagnoses in medical records may be sufficient.
The timeline matters. The mental health condition should have developed or significantly worsened during the period of platform use. A child who had pre-existing depression that became severe after starting Instagram would potentially qualify. A child who had no mental health history and developed anorexia after six months of daily TikTok use would potentially qualify. The question is whether the platform use coincided with and contributed to the mental health decline.
Your child does not need to have been formally diagnosed with social media addiction, though some cases involve that specific diagnosis. The addiction itself is often evident in behavioral patterns even without a formal diagnosis. If your child was unable to stop using the platform despite wanting to, if usage interfered with school or family or sleep, if your child became distressed when prevented from accessing the platform, those are indicators of addictive use.
Parents of children who died by suicide are filing wrongful death cases. These cases involve teenagers whose mental health deteriorated while using the platforms and who ultimately took their own lives. Many of these families found evidence on their children's devices showing that the teens had been exposed to suicide-related content, pro-anorexia content, or self-harm content through the platform algorithms.
Young adults who used the platforms as minors and continue to experience mental health effects can also bring claims. Someone who is now twenty-two but developed an eating disorder at fifteen while using Instagram may qualify. The key is that the exposure and the initial harm occurred during adolescence when the brain was most vulnerable.
The platforms involved are primarily Meta properties including Facebook and Instagram, TikTok, and Snapchat. YouTube is included in some cases, particularly cases involving the YouTube recommendation algorithm serving harmful content to minors. Other platforms may be added as litigation develops.
Where Things Stand
The social media addiction litigation is in its early stages but moving quickly. The first cases were filed in 2022 by individual families whose children suffered severe mental health crises or died by suicide. As of 2024, hundreds of cases have been filed in federal and state courts across the country.
In October 2022, the federal cases were consolidated into a multidistrict litigation in the Northern District of California, and hundreds of additional cases have since joined it. The MDL, officially titled In re Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, is being overseen by Judge Yvonne Gonzalez Rogers. This consolidation allows the cases to move through discovery together, which means plaintiffs can share evidence and the defendants cannot give different answers to the same questions in different courts.
The defendants moved to dismiss, arguing that Section 230 of the Communications Decency Act protects them from liability for content on their platforms. Section 230 generally says that online platforms are not liable for content posted by users. However, the plaintiffs argue that these cases are not about user-generated content but about product design. The addictive features like infinite scroll, variable rewards, push notifications, and algorithmic amplification are design choices made by the companies, not content created by users. In late 2023, the court allowed many of these claims to proceed past the motion to dismiss stage, finding that certain product design claims are not barred by Section 230.
Discovery is underway, which means plaintiffs' attorneys are obtaining internal company documents, emails, research studies, and other evidence. This process will likely take through 2024 and into 2025. The internal documents that have already been made public through whistleblowers like Frances Haugen provide a roadmap for what additional evidence exists within company files.
Bellwether trials, where a small number of representative cases go to trial to help both sides understand case value, are expected in late 2025 or 2026. These trials will be closely watched because they will be the first time juries hear evidence about what the companies knew and when they knew it.
Several state attorneys general have also filed cases against Meta and other platforms for violating consumer protection laws and targeting minors. A coalition of over 40 states filed suit in October 2023 alleging that Meta knowingly designed Instagram to addict children and deceived the public about the risks. These government cases proceed on a separate track from the individual injury cases but strengthen the overall narrative of corporate wrongdoing.
No global settlement has been reached in the individual injury cases, though some families have resolved individual claims under confidential terms. The companies have defended the cases vigorously and shown no indication of broad settlement discussions as of 2024. However, the legal landscape can change quickly as discovery reveals more damaging evidence.
New cases are still being filed and accepted by attorneys. The statute of limitations varies by state but generally runs for one to three years from when the injury was discovered or should have been discovered. For minors, the statute of limitations often does not begin until they turn eighteen, which means young adults can file cases for harms that occurred during their adolescence.
The litigation is expected to continue for several years. Mass tort cases involving corporate knowledge of product dangers typically take five to seven years from initial filing to widespread resolution. Given that these cases began in 2022 and 2023, significant movement toward settlements or trials is expected between 2025 and 2028.
What your daughter experienced was not a personal failing. What your son is going through is not a parenting mistake. The sleeplessness, the anxiety, the compulsive checking, the depression, the disordered eating, all of it was the designed outcome of systems built by people who measured their success by your children's inability to stop. They knew it was happening. They studied it. They refined it. And when their own research told them they were harming adolescent mental health, they kept that research secret and expanded the features causing the harm.
The engineers and psychologists who built these systems were not trying to help teenagers connect with friends or express creativity. They were trying to maximize daily active users and time spent, because that is what generated revenue. Your child was not the customer. Your child was the product, and the harm done to your child was a business decision made in conference rooms by people who knew exactly what they were doing. That is not speculation. That is what the documents show. And that is why this litigation exists.