In an unsigned per curiam opinion yesterday in Gonzalez v. Google, the U.S. Supreme Court vacated the Ninth Circuit’s judgment, which had held that plaintiffs’ complaint was barred by Section 230 of the Communications Decency Act, and remanded the case. But the Court’s opinion entirely skirted a highly anticipated issue: whether Section 230 does, in fact, shelter as much activity as courts have held to date.
This term marked the first time the Court appeared open to interpreting that statute. In October 2022, it granted certiorari in two related cases exploring the interaction between anti-terrorism laws and Section 230. Yesterday, as Justice Clarence Thomas wrote in a unanimous decision in the related Twitter v. Taamneh case, “Plaintiffs’ allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack.” In its brief opinion in Gonzalez, the Court explained that in light of the Twitter opinion, “it has become clear that plaintiffs’ complaint — independent of Section 230 — states little if any claim for relief” (emphasis added).
Section 230 protects interactive computer service providers by stating broadly that they “shall [not] be treated as the publisher or speaker” of third-party content, and by making clear that platforms remain immune from liability as publishers even if they choose to moderate or remove third-party content they consider “objectionable, whether or not such material is constitutionally protected[.]” 47 U.S.C. § 230(c). For more than a quarter century, courts have tended to apply that immunity broadly. Because the immunity is available to providers regardless of their size, discussions about Section 230 frequently overlap with discussions about market concentration and antitrust law.
This case is, of course, cabined by its narrow framing. Gonzalez v. Google concerned whether online platforms can assert Section 230 immunity against claims alleging that their content-curation algorithms facilitate terrorist acts. It was filed by the family of Nohemi Gonzalez, a 23-year-old American woman who was killed in a terrorist attack in Paris in 2015 while studying abroad. Plaintiffs alleged that Google, which owns YouTube, violated the Antiterrorism Act’s ban on aiding and abetting terrorism by (among other things) recommending ISIS videos to users through its algorithms, thereby aiding ISIS’s recruitment. The Ninth Circuit held that Section 230 immunity was available to the platform because “such claims necessarily require the court to treat [such platforms] as a publisher.”
During oral argument, the justices seemed wary of taking steps that would significantly reshape Section 230.
Justice Thomas, for example, has previously expressed skepticism in several cases about how broadly courts have construed Section 230 immunity. But even he appeared surprisingly sympathetic to the idea that Section 230 protects recommendations so long as the provider’s algorithm treats all content on its platform the same way. He emphasized the seemingly neutral nature of these algorithms: “The same algorithm to present cooking videos to people who are interested in cooking and ISIS videos to people who are interested in ISIS, racing videos to people who are interested in racing.”
Justice Jackson emphasized the original purpose of the Communications Decency Act, protecting internet users from indecent content, and was skeptical that Section 230 could properly be interpreted to protect platforms that promote offensive material.
Several justices pointed out that, even if Section 230 is ill suited to address today’s online landscape, revisiting such a sweeping question of liability is best left to Congress rather than the Supreme Court. As Justice Kagan memorably put it, “we’re a court. We really don’t know about these things … these are not like the nine greatest experts on the Internet.”
Antitrust Legislation and Litigation in the Wake of the Supreme Court’s Ruling
There were concerns that narrowing Section 230’s protection would expose websites to a wave of lawsuits alleging antitrust violations, among other theories of liability. But the Court’s decision to sidestep the issue entirely may instead put a spotlight on the many legislative proposals for antitrust-focused Section 230 reform.
One fast-moving area of the law that continues to operate in a Section 230 gray zone is artificial intelligence (AI), and specifically generative AI. As we have seen in recent months, generative AI could transform the way online services deliver content to users. If antitrust law is interested in “protect[ing] economic freedom and opportunity by promoting free and fair competition in the marketplace,” then it could make sense to expand Section 230 to protect at least some kinds of AI-generated content. However, because generative AI by definition produces new content rather than presenting or repackaging existing third-party content, it is hard to imagine that Section 230 as written shields these platforms from liability for the content their products generate. Indeed, at oral argument in Gonzalez, Justice Gorsuch indicated that he thought this kind of AI-generated content would fall outside the scope of Section 230. Even though the Court, given the specific facts of the case, did not take the opportunity in Gonzalez to clarify the proper scope of the protection, the area remains ripe for updating.