The Third Circuit’s Section 230 Decision in Anderson v. TikTok Is Nonsense
Baffling, lazy, dangerous nonsense.
This piece originally appeared at Techdirt. Many thanks to Mike Masnick for letting me cross-post it here.
Last week, the U.S. Court of Appeals for the Third Circuit concluded, in Anderson v. TikTok, that algorithmic recommendations aren’t protected by Section 230. Because they’re the platforms’ First Amendment-protected expression, the court reasoned, algorithms are the platforms’ “own first-party speech,” and thus fall outside Section 230’s liability shield for the publication of third-party speech.
Of course, a platform’s decision to host a third party’s speech at all is also First Amendment-protected expression. By the Third Circuit’s logic, then, such hosting decisions, too, are a platform’s “own first-party speech” unprotected by Section 230.
We’ve already hit (and not for the last time) the key problem with the Third Circuit’s analysis. “Given … that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms,” the court declared, “it follows that doing so amounts to first-party speech under [Section] 230, too.” No, it does not. To assume that First Amendment protection and Section 230 protection are mutually exclusive is a basic mistake.
Section 230(c)(1) says that a website shall not be “treated as the publisher” of most third-party content it hosts and spreads. Under the ordinary meaning of the word, a “publisher” prepares information for distribution and disseminates it to the public. Under Section 230, therefore, a website is protected from liability for posting, removing, arranging, and otherwise organizing third-party content. In other words, Section 230 protects a website as it fulfills a publisher’s traditional role. And one of Section 230’s stated purposes is to “promote the continued development of the Internet”—so the statute plainly envisions the protection of new, technology-driven publishing tools as well.
The plaintiffs in Anderson are not the first to contend that websites lose Section 230 protection when they use fancy algorithms to make publishing decisions. Several notable court rulings (all of them unceremoniously brushed aside by the Third Circuit, as we shall see) reject the notion that algorithms are special.
The Second Circuit’s 2019 decision in Force v. Facebook is especially instructive. The plaintiffs there argued that “Facebook’s algorithms make … content more ‘visible,’ ‘available,’ and ‘usable.’” They asserted that “Facebook’s algorithms suggest third-party content to users ‘based on what Facebook believes will cause the user to use Facebook as much as possible,’” and that “Facebook intends to ‘influence’ consumers’ responses to that content.” As in Anderson, the plaintiffs insisted that algorithms are a distinct form of speech, belonging to the platform and unprotected by Section 230.
The Second Circuit was unpersuaded. Nothing in the text of Section 230, it observed, suggests that a website “is not the ‘publisher’ of third-party information when it uses tools such as algorithms that are designed to match that information with a consumer's interests.” In fact, it noted, the use of such tools promotes Congress’s express policy “to promote the continued development of the Internet.”
By “making information more available,” the Second Circuit wrote, Facebook was engaging in “an essential part of traditional publishing.” It was doing what websites have done “on the Internet since its beginning”—“arranging and distributing third-party information” in a manner that “forms ‘connections’ and ‘matches’ among speakers, content, and viewers of content.” It “would turn Section 230(c)(1) upside down,” the court concluded, to hold that Congress intended to revoke Section 230 protection from websites that, whether through algorithms or otherwise, “become especially adept at performing the functions of publishers.” The Second Circuit had no authority, in short, to curtail Section 230 on the ground that by deploying algorithms, Facebook had “fulfill[ed] its role as a publisher” too “vigorously.”
As the Second Circuit recognized, it would be exceedingly difficult, if not impossible, to draw logical lines, rooted in law, around how a website arranges third-party content. What in Section 230 would enable a court to distinguish between content placed in a “for you” box, content that pops up in a newsfeed, content that appears at the top of a homepage, and content that’s permitted to exist in the bowels of a site? Nothing. It’s the wrong question. The question is not how the website serves up the content; it’s what makes the content problematic. When, under Section 230, is third-party content also a website’s first-party content? Only, the Second Circuit explained, when the website “directly and materially contributed to what made the content itself unlawful.” This is the “crucial distinction”—presenting unlawful content (protected) versus creating unlawful content (unprotected).
Perhaps you think the problem of drawing non-arbitrary lines around different forms of presentation could be solved, if only we could get the best and brightest judges working on it? Well, the Supreme Court recently tried its luck, and failed miserably. To understand the difficulties with excluding algorithmic recommendations from Section 230, all the Third Circuit had to do was meditate on the oral argument in Gonzalez v. Google. It was widely assumed that the justices took that case because at least some of them wanted to carve algorithms out of Section 230. How hard could it be? But once the rubber hit the road, once they had to look at the matter closely, the justices had not the faintest idea how to do that. They threw up their hands, remanding the case without reaching the merits.
The lesson here is that creating an “algorithm” rule would be rash and wrong—not least because it would involve butchering Section 230 itself—and that opinions such as Force v. Facebook are correct. But instead of taking its cues from the Gonzalez non-decision, the Third Circuit looked to the Supreme Court’s newly released decision in Moody v. NetChoice.
Moody confirms (albeit, alas, in dicta) that social media platforms have a First Amendment right to editorial control over their newsfeeds. The right to editorial control is the right to decide what material to host or block or suppress or promote, including by algorithm. These are all expressive choices. But the Third Circuit homed in on the algorithm piece alone. Because Moody declares algorithms a platform’s protected expression, the Third Circuit claims, a platform does not enjoy Section 230 protection when using an algorithm to recommend third-party content.
The Supreme Court couldn’t coherently separate algorithms from other forms of presentation, and the distinguishing feature of the Third Circuit’s decision is that it never even tries to do so. Moody confirms that choosing to host or block third-party content, too, is a platform’s protected expression. Are those choices “first-party speech” unprotected by Section 230? If so—and the Third Circuit’s logic requires that result—Section 230(c)(1) is a nullity.
This is nonsense. And it’s lazy nonsense to boot. Having treated Moody’s stray lines about algorithms like live hand grenades, the Third Circuit packs up and goes home. Moody doesn’t break new ground; it merely reiterates existing First Amendment principles. Yet the Third Circuit uses Moody as one neat trick to ignore the universe of Section 230 precedent. In a footnote (for some reason, almost all the decision’s analysis appears in footnotes), the court dismisses eight appellate rulings, including Force v. Facebook, that conflict with its decision. It doesn’t contest the reasoning of these opinions; it just announces that they all “pre-dated [Moody v.] NetChoice.”
Moody roundly rejects the Fifth Circuit’s (bananas) First Amendment analysis in NetChoice v. Paxton. In that faulty decision, the Fifth Circuit wrote that Section 230 “reflects Congress’s factual determination that Platforms are not ‘publishers,’” and that they “are not ‘speaking’ when they host other people’s speech.” Here again is the basic mistake of seeing the First Amendment and Section 230 as mutually exclusive, rather than mutually reinforcing, mechanisms. The Fifth Circuit conflated not treating a platform as a publisher, for purposes of liability, with a platform’s not being a publisher, for purposes of the First Amendment. In reality, websites that disseminate third-party content both exercise First Amendment-protected editorial control and enjoy Section 230 protection from publisher liability.
The Third Circuit fell into this same mode of woolly thinking. The Fifth Circuit concluded that because the platforms enjoy Section 230 protection, they lack First Amendment rights. Wrong. The Supreme Court having now confirmed that the platforms have First Amendment rights, the Third Circuit concluded that they lack Section 230 protection. Wrong again. Congress could not revoke First Amendment rights wherever Section 230 protection exists, and Section 230 would serve no purpose if it did not apply wherever First Amendment rights exist.
Many on the right think, quite irrationally, that narrowing Section 230 would strike a blow against the bogeyman of online “censorship.” Anderson, meanwhile, involved the shocking death of a ten-year-old girl. (A sign, in the view of one conservative judge on the Anderson panel, that social media platforms are dens of iniquity. For a wild ride, check out his concurring opinion.) So there are distorting factors at play. There are forces—a desire to stick it to Big Tech; the urge to find a remedy in a tragic case—pressing judges to misapply the law. Judges engaging in motivated reasoning is bad in itself. But it is especially alarming here, where judges are waging a frontal assault on the great bulwark of the modern internet. These judges seem oblivious to how much damage their attacks, if successful, are likely to cause. They don’t know what they’re doing.