Podcast: Considering the future of Section 230 (with Ben Sperry)

On this week’s episode of the podcast, I am joined by Ben Sperry, a senior scholar at the International Center for Law and Economics, to explore the shifting landscape of social media litigation. We dive deep into recent court rulings in California and New Mexico that challenge the historical protections of Section 230 by focusing on product design rather than hosted content (see a recent piece by Ben, “Treating Speech as a Bug, Not a Feature,” for more background). Among other things, we discuss:
- Why the shift from content-based liability to product design claims might permanently dismantle the protections afforded by Section 230
- How the threat of punitive damages may force platforms to implement aggressive age-gating and collateral censorship measures
- Whether applying product liability standards to algorithmic recommendation features creates friction with First Amendment principles
- Whether smaller tech entrants and startups can survive a legal environment defined by constant litigation and high compliance costs
- What the recent jury verdicts against Meta and Google signal for the future of algorithmic curation across the broader internet ecosystem
- When the focus on addictive design features such as infinite scroll will begin to affect other services, like streaming platforms
- How generative AI and large language models will be categorized under speech laws if Section 230 remains inapplicable
Thanks to the sponsors of this week’s episode of the Mobile Dev Memo podcast:
- INCRMNTAL. True attribution measures incrementality, always on.
- Xsolla. With the Xsolla Web Shop, you can create a direct storefront, cut fees down to as low as 5%, and keep players engaged with bundles, rewards, and analytics.
- Branch. Branch is an AI-powered MMP, connecting every paid, owned, and organic touchpoint so growth teams can see exactly where to put their dollars to bring users in the door and keep them coming back.
Interested in sponsoring the Mobile Dev Memo podcast? Contact Mobile Dev Memo advertising.
Transcript
Eric Seufert: Hello and welcome to the Mobile Dev Memo podcast. I am your host, Eric Seufert, and I am joined today by Ben Sperry. Ben, welcome to the podcast.
Ben Sperry: Thanks for having me. I really appreciate this opportunity.
ES: You are involved with the ICLE, but you were not in Rome last month, if I remember correctly. I do not remember seeing you there.
BS: Not all of us had the privilege of being invited out there. We are separated into two silos. The competition silo made its way to Rome. The innovation silo, for the most part, stayed stateside, though a few people based in Europe might have been there.
ES: You missed a nice event. It was very well organized, with a beautiful venue and interesting panels. I had a good time, and I do not mean to rub it in. We are going to talk all about these recent social media court cases: the New Mexico case and the California case. Before we dive into that, please introduce yourself to the audience.
BS: My name is Ben Sperry. I am a senior scholar of innovation policy at the International Center for Law and Economics. My research focuses on the intersection of civil liberties and government regulation, including online speech and platform regulation. Importantly, I was the principal author of an amicus brief in Massachusetts versus Meta, a case that is very similar in its underlying arguments to the two jury trials we will be talking about today. I have also written extensively on the First Amendment, Section 230, and the law and economics of products liability applied to online speech platforms. The core point of my writing on these issues is that applying products liability to online speech platforms is a difficult fit. It could result in a lot of collateral censorship: first, when platforms restrict user-generated speech itself to avoid liability for it; second, when platforms restrict minors from accessing protected First Amendment speech; and third, when speech platforms become less engaging and interesting because they are potentially liable for how speech is presented. That is my background and my writing on this issue.
ES: These cases are very relevant. I imagine most people listening are familiar at least superficially with the cases, but the best place to start this conversation would be a deep dive into those two cases. We are recording this on Friday. The cases were decided last week. Give us an overview of the cases in California and New Mexico.
BS: The California case, which is KGM versus Meta, is a private products liability case brought on behalf of a young lady for injuries she suffered as a minor. It has been described as a bellwether case because there are 1,600 plaintiffs suing in California alone who have been consolidated in another case moving through the pipeline. Across the United States as a whole, there are thousands of similar lawsuits pending. I have seen one estimate of over 10,000 for individuals and almost 800 for school districts.
New Mexico versus Meta is a consumer protection case brought by the state itself. Forty other states’ attorneys general have filed similar claims against Meta, but this is the first to get all the way to a jury trial. In New Mexico, the jury found in favor of the state, concluding that Meta should pay $375 million in damages for failing to protect young users from child predators on Instagram and Facebook. The New Mexico jury also found Meta responsible for misleading consumers about the safety of its platforms. Both of these claims were under state consumer protection law. In California, the jurors concluded Meta and Google should pay the woman $3 million in compensatory damages and an additional $3 million in punitive damages for their product design features, with Meta on the hook for 70% of that amount. While these are not very big amounts for Meta or Google in this individual case, as I mentioned, tens of thousands of similar cases, as well as potential class actions representing even more users, could result in substantially higher damage amounts if they reach the same outcome. Those are the two main cases we are talking about.
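For concreteness, the implied split works out as follows, assuming the 70% apportionment applies to the combined $6 million award (the transcript does not specify whether the split covers the punitive portion as well):

$$
\$3\text{M (compensatory)} + \$3\text{M (punitive)} = \$6\text{M}; \qquad 0.70 \times \$6\text{M} = \$4.2\text{M (Meta)}; \qquad \$6\text{M} - \$4.2\text{M} = \$1.8\text{M (Google)}
$$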
ES: What were the core legal theories used by the plaintiffs in each of these cases and how do they fundamentally differ? The compensation awarded in the California case was much less than in the New Mexico case. Talk to me about the different legal theories used.
BS: The legal theories shared a lot of similarities because they are both product design cases that attempt to focus on how the speech platforms were designed rather than on the underlying user-generated speech. The California case makes this explicit, with the plaintiff alleging products liability claims that the social media companies caused her mental health harms related to her use of those platforms through addictive design features like autoplay, infinite scroll, ephemeral content, notifications, and algorithmic recommendations. There was also an allegation that the platforms did not do enough to verify the ages of their users before allowing them to create profiles.
The New Mexico case proceeds under consumer protection law, specifically the Unfair Practices Act. Using its authority over unfair and deceptive trade practices and unconscionable trade practices in the conduct of any trade or commerce, the state similarly argues that Meta’s design features addict young people and expose them to dangerous content related to things like eating disorders and self-harm. But the bigger focus in New Mexico was that Meta’s design features also enabled predators and pedophiles to engage in child sex exploitation and to share child sexual abuse material, or CSAM. The unfairness claims were saying that these design features were harmful to consumers, and the deception claims were focusing on how the features were contrary to Meta’s public commitments to providing safety to users. The award was also bigger in New Mexico in part because the case was brought on behalf of all the users and parents in New Mexico and not just one individual plaintiff. But the jury also focused on the child predator and CSAM aspects, which it thought Meta did not do enough to protect against.
ES: How did the plaintiffs in both of these cases successfully bypass Section 230 immunity to bring them before a jury? You would imagine that immunity is the fundamental point of Section 230. Was it the mechanics of the design claims that allowed them to get around it?
BS: Both cases considered Section 230 and the First Amendment in challenges before the jury trials took place. Both courts accepted the framing that these cases were not about the underlying content but about the conduct of the social media companies and how they design their platforms. For instance, on Section 230, the California judge rejected the use of a “but-for” test that would provide Section 230 immunity solely because the cause of action would not otherwise exist but for third-party content. The court concluded that the fact that a design feature like infinite scroll impelled a user to continue to consume content that proved harmful does not mean there can be no liability for harm arising from the design feature itself.
On the First Amendment question, the court said that allegedly addictive features of the defendants’ platforms, such as endless scroll, cannot be analogized to how a publisher chooses to make a compilation of information; rather, the claims are based on harm allegedly caused by those design features regardless of the third-party content viewed. This is despite the fact that both cases were ultimately about the types of content that were harmful to users, whether we are talking about content glorifying self-harm, triggering body image problems, or other mental health issues associated with social media use. Contrary to what the courts said before these jury trials started, many federal courts have found that Section 230 applies to claims ultimately about third-party content and that the First Amendment protects the publication of speech from products liability claims that would interfere with the underlying speech expression.
For instance, a court dismissed a products liability claim against Netflix some years ago over its series “13 Reasons Why,” stating that the plaintiff’s efforts to distance the claims from the content of the show itself do not persuade. Without the content, there would be no claim. The claim there was that Netflix should have been liable for algorithmically recommending that specific show and for not putting up some kind of warning that kids should not watch it because it glorifies suicide. The court rejected that as basically inconsistent with how we think about the First Amendment. To see why, consider a hypothetical world where the social media companies used the same addictive design features but only hosted variations of those fireplace videos you can find online. Would we believe these features are causing harms to minors or anyone else in that case? Maybe in the sense that they are extremely boring and therefore depressing, but not in any sense that is actually actionable.
ES: These decisions would probably motivate a lot more lawsuits for these companies. Infinite scroll is not unique to Facebook or Instagram. Netflix does algorithmic curation. Spotify does algorithmic curation. These are commonplace across the consumer tech landscape. Do you foresee that happening, or is there an element of this that is specific to social media?
BS: These are common things. Algorithmic recommendations are common across the entire internet ecosystem. Almost everything uses them, from search engines to streaming services like Netflix and Spotify. Meta and Google may be well positioned as established players with a lot of revenue to figure out compliance. They might be able to redesign their platforms, figure out a way to age-gate, and restrict younger users from using the adult version. It would be costly, but they might be able to do it. They might even be able to afford these damages from juries and settlements in the future. The problem is for smaller players or new entrants into these spaces who might not be able to afford compliance or the threat of litigation. That is what Section 230 is supposed to be about: protecting against ruinous litigation and regulation that stifles competition and innovation online.
It could open the floodgates to some degree. There are all these pending lawsuits by school districts and states and other possible class actions. If the courts continue to follow this line of reasoning on Section 230, there is going to be a lot of litigation. The question will then be how products liability type suits interact with the First Amendment. Courts have been uncomfortable in the past with applying products liability to protected First Amendment speech, even when it leads to harm. It is usually only when the speaker makes some kind of explicit promise of safety to its hearers or readers that these claims are found actionable at all. Even how speech is presented is protected. If a newspaper in its editorial judgment can decide to use a big font for a headline, tease a story, continue it on another page, and then put an ad beside it, is that an addictive design feature that made you want to keep reading? If Netflix or a streaming video service ends on a cliffhanger and has autoplay for the next episode, is that addictive design? Courts will likely be reticent to say so. The floodgates might be open in a new way, but it will be an interesting time to figure out the exact contours of these claims in light of the First Amendment. I do not think this is the last word.
ES: What are the immediate compliance or operational hurdles that Meta and Google now face? Do they have to do anything before the appeals are decided?
BS: It has been reported in the media that they are planning to appeal. They will likely ask for some kind of stay in the meantime so they do not immediately have to change their systems. But with regulation emerging around the world, they are already having to think about how they are going to either create a separate product for younger users or start implementing age-gating or age assurance. These things are already largely in the works. They might be able to afford to figure these things out in a way that smaller or newer competitors may not. It is going to have far-reaching effects beyond the social media context. Anybody that uses similar features needs to be very aware of how that could be a problem going forward if this is the way courts are looking at these claims now.
ES: Is there any sense now of what product changes they would be mandated to make? Is one of those just age-gating, applying only to children, or is it deeper than that? If you say infinite scroll is predatory, that changes the entire construction of the consumer internet. Do we have a sense now of what changes they might be mandated to make?
BS: It will be interesting because the damages awards almost have to be enough for the company to care. Everybody remembers the case where a lady spilled hot coffee on herself at a McDonald’s drive-through. It did have a real-world effect. McDonald’s lowered the temperature of its coffee, which was probably way too hot if it could lead to the degree of burns that lady suffered. It was not even the compensatory damages; it was the punitive damages, and the threat thereof if other people had the same thing happen to them, that made McDonald’s change. Here, it is the threat that if other cases go the same way and you multiply the plaintiffs many times over, it might make the companies change what they do. Practically, it might not be an injunction against certain practices, but they might end up having to say that juries have decided we either need to age-gate, which comes with its own First Amendment questions, or we need to get rid of algorithmic recommendations, infinite scroll, and ephemeral content for everybody. Which one they decide to do may depend on whether they conclude this is just an issue with minors or whether it potentially opens them up to liability to everyone else too.
ES: You wrote a great piece about the treatment of free speech in these cases. You said these cases treated Meta and Google’s platforms not as forums for speech but as engineered products capable of causing harm. What are the implications here for free speech?
BS: The potential implications for free speech are serious. To avoid liability, both the curation and the presentation of speech would likely be affected. Contrary to the claims of the plaintiffs, it is clearly the underlying hosted speech that will have to be considered if these jury verdicts stand. Speech platforms will probably remove a lot of speech that is protected under First Amendment law if it can be seen as potentially harmful to minors. How speech is presented is the target of these lawsuits, which could lead to the removal of the features that are allegedly addictive. But that could also make these platforms pretty dull to use, to the detriment of both users and content creators. Imagine if this podcast were no longer recommended to those who view similar content, or if its fans no longer received notifications for a new episode. That would be a harm to them, but it would also be a harm to the content creators. Even who is allowed to access speech platforms at all could be affected. If minors, who are not a great source of revenue anyway, become too costly to serve because of the threat of liability, the answer will probably be to impose strong age verification and ban them. That would likely run into First Amendment issues. We have precedents that say you cannot age-gate access to protected speech; that is unconstitutional if done by statute. It seems odd that you could get the same effect by alleging that a platform was negligent, or committed a products liability or consumer protection violation, because it did not do enough to age-verify. It seems a little incongruous, but that is the result that could end up happening: restricting minors from using speech platforms that are largely just hosting speech that is lawful for them to see. We are talking about, for the most part, videos of other kids dancing and things like that. A lot of the speech on these platforms is protected speech even as to children.
ES: It is not just a tool for consuming content; it is a tool for broadcasting content. You would be cutting people off from a mechanism for communication.
BS: For self-expression, yes. Minors themselves are able to create a lot of content, whether that is just for their own friends and family or, in some cases, content creators who have substantial revenue as a result. Regardless, it is speech, and they are cut off from not only participating by receiving speech but being speakers themselves.
ES: One thing that came to mind is Roblox. It is a game publishing platform that is really popular with kids, who build their own games and distribute them on the platform. If these mechanisms are truly considered to be designed with the intent to addict people, and children in particular, there are very specific sandboxes designed for children that use them. These are billion-dollar companies that are publicly traded. I do not know where this ends. Social media is a place to broadcast thoughts and videos, but Roblox is a platform for building a product and deploying it. They nonetheless have algorithmic curation and discovery features. It seems weird that social media was the first place to start. If you wanted to explore these issues related to children, it feels like the products designed to be used by children would be the more obvious starting point.
BS: You are right about that. When it comes to something like Roblox and other similar platforms, what they would need to be cognizant of, especially after the New Mexico jury verdict, is how their platforms could be used by child predators to connect with minors. That is where they are going to need to be wary. While I am a little skeptical of the addictive feature product design stuff, companies should probably be very focused on preventing unknown adults from interacting with children or creating opportunities for them to do so. That is where these platforms need to be especially cognizant and where regulators, enforcers, and investigators should have been focusing, more so than on whether we like algorithmic recommendations or likes or autoplay. Maybe they should have been focusing on whether these companies are doing enough to prevent predatory adults from going after kids and using their platforms to do so. There is always a limit and a tradeoff. You can use a phone, text messaging, or email to go after kids too, but those are general-purpose products with many lawful uses, so we do not restrict how they are used in general just because they can also be used to do terrible things. To some extent, instant messaging on these platforms could be analogized to that. There is a limit to what you can do, but as a general matter, you do not want to create a platform where adults can use the targeting abilities of those platforms to find children and do predatory things.
ES: The focus on these mechanisms, which are essentially ubiquitous, is an odd way to approach the harm angle as it pertains to children. If you wanted to specifically scope this to the harms for children, it seems odd to start with a broad-based general-purpose social media platform where children are not even the majority of users, when there are platforms specifically designed for children.
BS: You are right about that. We do have COPPA. It does not quite get at all of what we are talking about, but that is a federal law designed to protect the privacy of children under 13, and there have been proposals to extend it. There are other ways this could be targeted to more specifically get at real harms that everybody agrees are harms to children. The New Mexico case to some extent tries to go after those things, but these cases are also far broader than that and will have far broader implications for free speech.
ES: I am very familiar with COPPA, and it is very relevant in the mobile gaming space. We always endeavored to go further than COPPA because we wanted to protect children; it was the right thing to do. It seems like these cases, from a layman’s perspective, are using a very broad brush in painting these ubiquitous mechanisms as addictive. Either the plaintiffs went after the biggest platforms just because they have a lot of money, or they are actually more focused on the mechanisms than on protecting children. Either of those feels like not how you would pursue this if you truly had the intent of remaking a system that you felt was disproportionately damaging to children.
BS: I think you are right. For litigators, both private plaintiffs’ lawyers and even state AGs, when you have a hammer, everything looks like a nail. They use the tools they are familiar with and they have at their disposal to go after the deep pockets and entities that do not have the best PR right now. They focused on those instead of going after the worst actors first.
ES: Could holding a platform liable for addictive design force companies to over-censor lawful but intense speech? If you think about people having debates on Facebook that get heated because they are passionate, is that a potential outcome?
BS: Social media companies will be extremely cognizant of speech that could cross the line, both in the bullying context and in other contexts. The law on true threats under the First Amendment limits what prosecutors or the law itself can do, but the fear of liability will likely lead to the removal of a lot of protected speech. This could include things like inside jokes, satire, and other things that would not amount to true threats or harassment, because of the difficulty an algorithm, or even a third-party reviewer, has in understanding that specific context. For example, I used to be a public defender. I once represented a young man who was picked up for public drunkenness and who had a long history with a set of small-town cops. He threatened both officers in colorful and wild language. But the jury only convicted him of one of the counts of terroristic threats. I was able to successfully argue that one set of threats was actually protected speech because it was not objectively capable of being taken as a true threat. But the charge involving the other cop was considered a true threat because the defendant mentioned where the cop lived and the age of his daughter; he gave some personally identifiable information suggesting he actually might mean that one. Platforms are unlikely to make these fine distinctions if the underlying liability for bullying starts to extend to them. Both the protected and the unprotected speech will likely just be removed. We see this in cases even outside of that context, like when government officials do not like parody or satire that is targeted at them. They quickly label it misinformation and put pressure on social media companies to take it down. It would be one thing if these platforms were making these moderation decisions as private entities responding to market signals, but it is quite another if what drives their decisions is state action or the threat thereof. Under the First Amendment, private actors, and not the government, are supposed to drive editorial decisions. We call it the marketplace of ideas. We want social media companies to remove things because they see it as the right thing to do, but that is a very different thing from being told that if they do not, they face the threat of litigation or regulation. Backdoor censorship, especially censorship done in the name of protecting children, could increase. Just the threat of being sued under consumer protection laws could be enough for platforms to ask the state what it wants them to do so it does not sue. That might not even come to the light of day; the state does not have to file a suit for that to happen. My hope is that appeals courts will get these cases right and that other courts will not go down the same path. If they do not, I think free speech is in trouble online.
ES: What is the scope? This ends up applying to the entire internet. Every company has to realign around the risk. Every product you use—Amazon, Netflix, Spotify—uses algorithmic curation, likes, or shares.
BS: It goes way beyond social media. If it becomes about algorithmic recommendations automatically making you liable for whatever speech we are talking about, search, YouTube, Netflix, streaming services, and your podcasting service will be implicated. Anything that involves a like or a share or a retweet is going to be implicated. It could be disastrous not only for speech but for how we do things online.
BS: Amazon is a great example. If we are saying that every time Amazon gives you a recommendation for a new product based on your prior buying or search history, it is now liable no matter who the third-party seller is, that could be a real problem. They might have to change, because everybody is already complaining that Amazon does a lot of first-party selling in competition with third-party sellers. If that is the world we are in, then not having third-party sellers at all might increasingly be in their interest.
ES: What about reviews? Those are written by people, and they are sorted algorithmically. The big elephant in the room is chatbots and AI. If this liability applies, and you think of algorithmic curation or optimization as a tactic that is not protected, that is essentially what a chatbot is doing.
BS: LLM chatbots are going to be really interesting. I have written in the past that I don’t think chatbots would receive Section 230 immunity because they seem to be more analogous to when you or I read a whole bunch of stuff online and then integrate all that information into new speech. It seems to me like it might be their own speech to some degree. There will still be questions about whose speech it is exactly, like when people try to get LLMs to say specific things through their prompts. At some point, when does that become the user’s own speech, especially when the user republishes it? A lot of those questions are going to arise more under the First Amendment context than the Section 230 context. In other words, how do new laws or even old products liability laws interact with the First Amendment rather than Section 230? It will be an interesting question. If you end up saying something defamatory because an LLM told you to or answered wrongly, maybe both of you will be liable to some extent. But for a lot of other things that are protected speech, I think it is going to be hard to hold either the chatbot or the person that is interacting with the chatbot liable for those things.
ES: Ben, thank you. This was fantastic. I would point everyone to the article that you wrote on Truth on the Market called “Treating speech as a bug, not a feature.” I found it to be very informative and I am sure our listeners will as well.
BS: I really appreciate the time. If anyone is interested, they can look up my profile on ICLE’s website where we have a whole lot more on the Truth on the Market blog, amicus briefs, regulatory comments, and white papers.
ES: Thanks, Ben. Thanks so much for your time and have a great weekend.
BS: Thank you. Appreciate the time.