Design Is the New Tobacco: Architecture as Liability | Meta & YouTube Verdicts
What this week's social media rulings actually established, and why anyone building technology should be paying attention.
This is the first of three pieces examining what the March 2026 Meta/YouTube verdicts actually mean. This one looks at the legal precedent and what it signals for anyone building or buying technology. The next will explore the vocabulary gap the jury revealed. The third will examine the deeper orientation failure that got us here.
Yesterday, a Los Angeles jury found Meta and YouTube negligent for designing platforms that addicted young users. They awarded $6 million in compensatory damages, with additional punitive damages of $2.1 million from Meta and $900,000 from Google. Meta was assigned 70% of the responsibility, YouTube 30%.
The day before that, a New Mexico jury ordered Meta to pay $375 million for failing to protect children from predators on its platforms. The New Mexico Attorney General’s office had run an undercover operation using fake accounts posing as children under 14, which quickly received sexually explicit material and contact from adults. The jury found Meta engaged in “unconscionable” trade practices that took advantage of children’s vulnerabilities. A second phase of that trial starts in May, where the court will decide whether Meta must actually change its platforms.
Two verdicts in 48 hours, and they’re the beginning of something larger. The LA case was a bellwether, one of roughly 1,600 consolidated plaintiffs in California alone, including more than 350 families and over 250 school districts. TikTok and Snap settled before this trial even began. A federal trial involving school districts and parents nationwide starts this summer. California’s Attorney General has his own trial coming in August.
Every headline is calling this Big Tech’s “Big Tobacco moment.” The comparison is useful in some ways, but it’s absorbing attention that belongs elsewhere. The most important thing about this verdict has nothing to do with tobacco.
The Analogy Everyone Is Reaching For
The Big Tobacco comparison is everywhere, and it's easy to understand why. The structural parallels are real.
The same law firms that litigated the 1990s tobacco settlements, Motley Rice and Lieff Cabraser among them, are leading these cases. The legal architecture is the same: bellwether trials chosen to signal how arguments play out before juries, consolidated litigation designed to build pressure toward settlement, and an internal-documents strategy that puts corporate knowledge front and center.
The pattern of corporate behavior looks familiar, too. Internal Meta documents showed the company knowingly targeted preteens. One memo: "If we wanna win big with teens, we must bring them in as tweens." Eleven-year-olds were four times as likely to keep returning to Instagram as to competing apps, despite the platform requiring users to be at least 13. Meta's own research told them their platform could harm young users, and the company used those findings to increase engagement rather than reduce risk.
Internal knowledge of harm. Targeting vulnerable populations. Optimizing for engagement over safety. You can hear the echo of tobacco executives testifying before Congress that nicotine wasn't addictive. Senator Ed Markey said it plainly after the verdict: "Big Tech's Big Tobacco moment has arrived." Former FTC commissioner Alvaro Bedoya wrote that "a jury of regular people has managed to do what Congress and even state legislatures have not."
Instagram head Adam Mosseri testified that social media use can be "problematic" but not "clinically addictive." Google's YouTube VP of Engineering, Cristos Goodrow, testified that his own children use the platform for hours daily and he believes it's "good" for them. Google's spokesperson, in a statement after the verdict, called YouTube "a responsibly built streaming platform, not a social media site." That's not a factual claim. That's a definitional defense: if you can control the category, you can control the liability.
These are the sounds a threatened industry makes. We’ve heard them before.
But the analogy, useful as it is, obscures the real precedent.
Where the Analogy Breaks
Tobacco’s harm was molecular. Nicotine binds to receptors. Smoke damages tissue. The causal chain was physical, measurable, and eventually undeniable, and the legal system could draw on established toxicology to build its case.
Social media's harm is different in kind, not just in degree. The damage doesn't come from a substance you ingest. It comes from the design of the environment you inhabit.
The platform shapes what you see,[1] when you see it, how you feel about it, and what you do next. Infinite scroll eliminates natural stopping points. Algorithmic recommendations serve personalized content tuned to maximize user engagement, measured through likes, time spent, reposts, and other metrics.[2] Push notifications pull you back in. Beauty filters that the company's own employees and 18 outside experts flagged as harmful to body image were kept in place anyway. These are all architectural choices about how the environment works, and the harm they produce is behavioral, experiential, and genuinely difficult to measure with the tools we currently have.[3]
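To make "architectural choice" concrete, here is a minimal sketch of the kind of feed loop at issue. It is hypothetical: the function names, weights, and structure are illustrative assumptions, not anyone's actual implementation.

```python
# Hypothetical sketch of an engagement-optimized feed loop.
# Not any platform's real code; names and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_likes: float       # model's estimate of like probability
    predicted_watch_time: float  # estimated seconds of attention
    predicted_reshares: float    # estimated probability of a repost

def engagement_score(post: Post) -> float:
    # The design choice: rank purely on predicted engagement.
    # Nothing in this objective represents wellbeing, age, or stated preferences.
    return (1.0 * post.predicted_likes
            + 0.5 * post.predicted_watch_time
            + 2.0 * post.predicted_reshares)

def next_batch(candidates: list[Post], batch_size: int = 10) -> list[Post]:
    # Infinite scroll: every request for "more" returns another ranked batch.
    # The loop has no terminal state, so there is no natural stopping point.
    return sorted(candidates, key=engagement_score, reverse=True)[:batch_size]
```

What matters in the sketch is what is absent: no term for harm, age, or user wellbeing appears anywhere in the objective. That absence is itself a design decision, and it is exactly the kind of decision the plaintiffs framed as conduct rather than content.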
The American Enterprise Institute has pointed out several real complications with the tobacco frame that are worth taking seriously: social media platforms involve First Amendment-protected speech in a way that tobacco products do not. The harm is behavioral rather than chemical. And “social media addiction” is not a recognized psychiatric diagnosis in the current DSM-5-TR.[4] These are genuine objections, and they point to something important about how new this legal territory actually is.
The jury found liability anyway, and the way they got there is what matters most.
Legal Innovation: The Seam in Section 230
For decades, Section 230 of the Communications Decency Act has shielded tech platforms from liability for user-generated content. Whenever someone sued over harm linked to social media, companies invoked Section 230, and cases typically died early.
The plaintiffs in this case found the seam.
Their strategy was to target platform design rather than platform content. Not “your users posted harmful things” but “you built a machine that shapes behavior in harmful ways, and you knew it.” As one legal analysis put it: “these lawsuits are not about who posted what. They target the design architecture itself.”
Judge Carolyn Kuhl’s November 2025 ruling is what made this possible. She drew a distinction between features related to content publishing, which Section 230 might protect, and features like notification timing, engagement loops, and the absence of meaningful parental controls, which it might not. She established that treating algorithmic design choices as the company’s own conduct, rather than as the protected publication of third-party speech, was a viable legal theory for a jury to evaluate.
That distinction, conduct versus content, is a potential roadmap for courts nationwide. And the jury just validated it.
This is what the Big Tobacco analogy obscures. The precedent here isn't that a big company got caught being harmful. It's that a jury ruled architecture itself can be defective. Not the content on the platform. The design of the platform. The shape of the environment.
The way you design a system that shapes human behavior can create liability, full stop.
That principle does not stop at Instagram and YouTube.
Where This Leads
If platform design can be a defect, then the principle applies to every system whose architecture shapes user behavior without adequate transparency or safeguards. And the cases are already extending.
AI chatbots are next. In January, Google and Character.AI settled multiple lawsuits alleging that Character.AI’s chatbots contributed to teen suicides and severe psychological harm. In those cases, the “platform” was a simulated person. The “engagement loop” was emotional dependency. The “design defect” was a system that could not recognize escalating distress, trigger crisis intervention, or alert a guardian. OpenAI faces similar lawsuits. Legal analysis from firms like McGuireWoods is already asking the question explicitly: can AI be a defective product?
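For contrast, here is a minimal sketch of the kind of safeguard those lawsuits allege was missing: a per-message check that can interrupt the engagement loop. Everything here, the keyword stand-in, the thresholds, the escalation hooks, is an illustrative assumption, not a description of any vendor's system.

```python
# Hypothetical sketch of a distress safeguard in a chatbot loop.
# The classifier stand-in, thresholds, and hooks are illustrative only.
from typing import Callable

CRISIS_MESSAGE = ("It sounds like you're going through a lot. "
                  "You can reach the 988 Suicide & Crisis Lifeline anytime.")

DISTRESS_MARKERS = ("want to die", "hurt myself", "no way out")

def distress_level(message: str) -> float:
    # Stand-in for a trained classifier scoring distress from 0.0 to 1.0.
    # A real system would use a model and conversation history, not keywords.
    hits = sum(marker in message.lower() for marker in DISTRESS_MARKERS)
    return min(1.0, hits / 2)

def respond(message: str,
            generate_reply: Callable[[str], str],
            notify_guardian: Callable[[float], None]) -> str:
    score = distress_level(message)
    if score >= 0.9:
        # Escalating distress: break the persona, surface crisis resources,
        # and alert a designated guardian instead of continuing the chat.
        notify_guardian(score)
        return CRISIS_MESSAGE
    return generate_reply(message)
```

The point is not that this particular check would be sufficient; it is that something like it is straightforward to build, which is what makes its absence look like a design decision rather than a technical limitation.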
If recommendation algorithms are defective design, what about AI agents that take actions on behalf of users? The design liability surface gets larger, not smaller, as systems become more autonomous.
Enterprise tools are on this trajectory. This connects directly to the competitive landscape I wrote about in “Big Tech Bets.” Organizations making build-vs-buy decisions about AI tooling now need to think about design liability, not just feature sets and pricing. If you deploy an AI system that shapes employee behavior, customer interactions, or decision-making in ways that cause harm, and you chose that design, or chose a vendor whose design you didn’t scrutinize, the question of who’s liable is no longer clearly settled. The answer used to be “not us.” After this verdict, that assumption deserves a second look.
The regulatory wave is converging. Australia banned social media for users under 16. More than 40 U.S. state attorneys general have filed suits against Meta. Several states are passing their own protective legislation. Congress is considering the Sunset Section 230 Act. The California AG has a trial coming in August. Design choices are becoming compliance decisions, and the window for getting ahead of that shift is closing.
Update, March 26 morning: As this piece was being published, Sportico reported that the same defective-design legal theory is already being applied to sports betting apps, with a lawsuit alleging DraftKings and FanDuel designed their platforms to encourage addictive behavior.
The Strategy Gap, Again
I return to the same observation from different directions in this series. In “Big Tech Bets,” it was the competitive terrain that founders and buyers aren’t reading clearly enough. In “Dressed as Disruption,” it was the gap between the narrative of AI transformation and the underlying reality. Here, it's the gap between how fast design decisions are made and how slowly accountability frameworks catch up.
The through-line: the strategy gap is bigger than the technology gap.
Companies that treat design as a purely technical or UX decision, separate from legal, ethical, and strategic risk, are operating with an outdated map. This verdict doesn’t mean “all engagement is bad” or “every design choice is a liability.” It means: if your design shapes user behavior, you need to understand how, you need to be honest about it, and you need to take responsibility for the outcomes.
That’s not a legal standard, not yet. It’s a strategic posture. And the companies that adopt it now, proactively, will be in a fundamentally different position than the ones that wait for juries to impose it retroactively.
The market, for its part, shrugged. Meta’s stock went up 5% after the New Mexico verdict. The financial system and the legal system are, at this moment, operating in different realities. That gap won’t last forever. It rarely does.[5]
What Comes Next in This Series
The jury deliberated for nearly 44 hours over nine days before reaching this verdict. At one point they told the judge they couldn’t agree on one of the defendants. They were sent back to try again.
That struggle is itself a story, and it’s the subject of the next piece: the vocabulary gap. We are retroactively building accountability for systems that outpaced our ability to name what they did to us, and we’re doing it with borrowed language from domains where it doesn’t quite fit. “Addiction” from substance abuse. “Defective design” from physical products. “Negligence” from tort law. Every term is carrying weight it wasn’t built for.
The third piece will pull back further and ask the bigger question: what does it mean that our default mode for understanding powerful systems is after-the-fact, in courtrooms, at enormous cost? And what would it look like to build the capacity to orient before the damage is done?
For now, the takeaway from yesterday is narrower but concrete: a jury just established that the way you build something can make it defective, even if what people do with it is their own choice. That principle has been tested, and it held. The companies, the builders, and the buyers who take it seriously now will be ahead of the ones who wait for the next verdict.
This is part of Operating Conditions, a series about the strategic landscape leaders and builders are navigating. Previous entries include “Big Tech Bets: The Competitor No One Puts in the Pitch Deck” and “Dressed as Disruption: Everyone’s Wearing It This Quarter.”
Notes & Caveats
The LA verdict will almost certainly be appealed by both Meta and Google; a Meta spokesperson said they “respectfully disagree with the verdict” and Google called the case a misunderstanding of YouTube. The dollar amounts ($6M + punitive in LA, $375M in NM) are individually small relative to Meta’s $1.5 trillion market cap. The precedent, and the 1,600+ cases behind it, is where the real exposure lives. I’m not a lawyer, and nothing here is legal advice. What I am is someone who watches how technology decisions and business strategy interact, and this verdict changes the calculus for both.
Update, March 26: Amnesty International called the verdict "a landmark moment" and urged mandatory design changes to guarantee online safety for children.
[1] Marshall McLuhan's observation that "the medium is the message" keeps proving itself. The content on Instagram matters, obviously, but the lawsuit was about something McLuhan would have recognized immediately: the medium itself, the way the platform structures attention and experience, is the thing doing the shaping. The content is almost beside the point when the container is designed to be compulsive. McLuhan was writing about television in 1964; the principle has only become more literal since. See: Marshall McLuhan, Understanding Media: The Extensions of Man (1964).
[2] The Congressional Research Service notes that many social media platforms use algorithms to recommend content to maximize user engagement. A 2024 study in PNAS Nexus found that these personalized recommendations drive between 75 and 95% of consumption on platforms. A separate 2023 study published in Nature Human Behaviour demonstrated that algorithms designed to maximize engagement prioritize content that appeals to users' immediate impulses rather than what they would reflectively choose, and that users report being better off when algorithms target their stated preferences rather than maximizing clicks.
[3] Measuring the behavioral and psychological harm of platform design is an active and unsettled research problem. The difficulty is partly methodological (isolating the effect of a design feature from everything else in a young person's life) and partly structural (platforms control the data researchers would need, and rarely share it). Worth noting: several groups are working on exactly this problem from different angles. At MIT Media Lab, the Fluid Interfaces, Cyborg Psychology, and AHA (Advancing Human Agency) groups are developing frameworks for understanding how designed systems interact with human cognition and agency. The Oxford Internet Institute has been a major contributor to the empirical evidence base. And at JOPRO, our Data x Direction and DigiNEST working groups are approaching the measurement question from an epistemic and methodological angle: what tools and frameworks would you need to evaluate these systems in real time, rather than reconstructing harm after the fact?
[4] The diagnostic landscape here is genuinely complicated and worth a brief note. The DSM-5-TR recognizes only one behavioral addiction: gambling disorder. Internet Gaming Disorder is listed in Section III as a "condition for further study," but social media use disorder has no formal entry. The WHO's ICD-11 recognizes Gaming Disorder as a behavioral addiction, and researchers have begun developing assessment tools that adapt ICD-11 criteria to social media use. A 2025 paper in the Journal of Behavioral Addictions recently proposed clinical diagnostic criteria for Social Media Use Disorder, integrating features from both the DSM-5 and ICD-11 frameworks. The field is moving toward formal recognition, but it hasn't arrived yet, and the gap between clinical consensus and legal proceedings is part of what made this trial so difficult for the jury. Worth noting in this context: JOPRO's Mental Health Paradigms and Perspectives project is looking into the history and future pathways of the DSM itself, including how diagnostic categories evolve in response to new kinds of harm.
[5] 2026 is indeed shaping up as the year when some entities try to get away with whatever is possible before the reckoning arrives. More ahead on that as well.