Is social media not just bad, but illegally bad? Should tech companies pay for making it that way? According to two US juries — and no shortage of outside commentary — the answer to both questions is “yes.”
Earlier this week, two juries — one in New Mexico, one in Los Angeles — held Meta liable for a total of hundreds of millions of dollars for harming minors. YouTube was also found liable in Los Angeles, and both companies are appealing their losses. In one sense, the decisions were surprising. Meta and Google operate platforms for transmitting speech and are typically protected in a variety of ways by Section 230 and the First Amendment; it’s unusual for suits to clear these hurdles. In another sense, they feel inevitable. The web of 2026 has become almost synonymous with a few widely disliked for-profit platforms, and the harm they’ve caused is often tangible — but it’s still far from certain what this defeat will change, and what the collateral damage could be.
If these decisions survive appeal — which isn’t certain — the direct outcome would be multimillion-dollar penalties. Depending on the outcome of several more “bellwether” cases in Los Angeles, a much larger group settlement could be reached down the road. Even at this early stage, it’s a victory for a legal theory that social media platforms should be treated like defective products — a strategy designed to get around the shield of Section 230, but one that’s often failed in court. “The California case specifically is the first time social media has ever had to face the staredown and judgment of a jury for specific personal injuries,” attorney Carrie Goldberg, who pushed forward major early social media liability suits, including an unsuccessful case against Grindr, told The Verge. “It’s the dawn of a new era.”
For many activists, the overall goal is to make clear that lawsuits will keep piling up if companies don’t change their business practices. What practices? In New Mexico, a jury was swayed by arguments that Meta had made statements misleading users about the safety of its platforms. In LA, the plaintiffs successfully claimed Instagram and YouTube were designed in a way that facilitated social media addiction that harmed a teenage user. Meta and Google (and other nervous companies) could plausibly change specific features or be more cautious in their public statements and disclosures. But each case depends on a set of highly specific circumstances, and there’s no one-size-fits-all answer about what needs to change.
Eric Goldman, a legal blogger and expert on Section 230, sees clear legal danger ahead for social media services. “These rulings indicate that juries are willing to impose major liability on social media providers based on claims of social media addiction,” Goldman wrote after the ruling. In an email to The Verge, he noted the issue was bigger than just juries. “Judges are certainly aware of the controversies around social media,” Goldman said. In the Los Angeles case and other upcoming bellwether trials, “the judges have not given social media defendants much benefit of the doubt, which is how the plaintiffs’ novel cases were able to reach trials in the first place.” It’s a situation, he says, that “does feel differently compared to a decade ago.”
Goldman pointed out that New York and California have also passed laws banning “addictive” social media feeds for teens — so even if an appeals court reverses the recent decisions, that won’t necessarily turn back the clock.
The best-case outcome of all this has been laid out by people like Julia Angwin, who wrote in The New York Times that companies should be pushed to change “toxic” features like infinite scrolling, beauty filters that encourage body dysmorphia, and algorithms that prioritize “shocking and crude” content. The worst-case scenario falls along the lines of a piece from Mike Masnick at Techdirt, who argued the rulings spell disaster for smaller social networks, which could be sued under a vague standard of harm simply for letting users post and see First Amendment-protected speech. He noted that the New Mexico case hinged partly on the argument that Meta had harmed kids by providing end-to-end encryption in private messaging, creating an incentive to discontinue a feature that protects users’ privacy — and indeed, Meta discontinued end-to-end encryption on Instagram earlier this month.
Blake Reid, a professor at Colorado Law, is more circumspect. “It’s hard right now to forecast what’s going to happen,” Reid told The Verge in an interview. On Bluesky, he noted that companies will likely look for “cold, calculated” ways to avoid legal liability with the minimum possible disruption, not fundamentally rethink their business models. “There are obviously harms here and it’s pretty important that the tort system clocked those harms” in the recent cases, he told The Verge. “It’s just that what comes in the wake of them is less clear to me.”
While Reid sees legal risks for smaller platforms with fewer resources in these decisions, he’s not convinced they’re more serious than the challenges new entrants already face in a hyper-consolidated online landscape built on massive amounts of data collection. “There are things that make it hard to do something really new in this space that are driven by the sort of marketplace and the surrounding policy,” he said.
Reid, Goldman, and Masnick all warn there’s a clear chance that the fallout could harm marginalized people who use social media to connect. “There will be even stronger pushes to restrict or ban children from social media,” Goldman told The Verge. “This hurts many subpopulations of minors, ranging from LGBTQ teens who will be isolated from communities that can help them navigate their identities to minors on the autism spectrum who can express themselves better online than they can in face-to-face conversations.”
If platforms like Instagram are inherently damaging, directly comparable to gambling or cigarettes (comparisons critics frequently make), then being kicked off them would be no great loss. But even research that suggests social media can be harmful for adolescents has associated moderate use with better well-being. Conversely, harmful online content like harassment and eating disorder communities flourished even before recommendation-driven, hyper-optimized modern social media; tinkering with specific algorithmic formulas could have a positive impact, but it may not provide a deep or lasting fix. The appeal of punishing Meta is obvious — what it will mean for everyone else is much less clear.
