Section 230 shields an ecosystem. Rather than protecting particular platforms or offering separate rules for different sorts of services, it protects all internet intermediaries equally, regardless of their size, purpose, or policies. Under this uniform, predictable arrangement, specific platforms may set their own rules, choosing to cater to mass audiences or niche subcultures and governing their services accordingly. Diversity of opinion marks the whole system but not every platform therein. This liberal, decentralized approach remains the best mechanism for ensuring freedom of speech online.

Section 230 was intended to let a thousand platforms bloom, ensuring that, according to the Congressional findings that precede the bill’s substantive sections:

“The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”

Crucially, this expectation was made of the internet as a whole, that is, of "the Internet and other interactive computer services" taken together, not of specific services. Unfortunately, critics of the policies of particular platforms such as Twitter and Facebook increasingly misread this expectation as applying to individual platforms.

In a Federalist Society Blog post titled “Section 230 and the Whole First Amendment,” Craig Parshall, General Counsel of the National Religious Broadcasters, claims that Section 230 was intended to incentivize individual tech platforms to open themselves to all speech.

“The intent behind Section 230 was to incentivize tech platforms to screen out harmful and offensive content while also providing a ‘forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad of avenues for intellectual activity.’

It is time that they be required to live up to their part of the bargain; namely, expressly conditioning their protection under Section 230 in return for their use of a First Amendment free speech paradigm for their decisions on third-party content.”

Setting aside the problem of how platforms might be expected to screen "harmful and offensive" material while simultaneously mirroring the First Amendment, by substituting “tech platforms” for “the internet,” Parshall dramatically alters Section 230’s expectations.

More recently, Conservative Partnership Institute Policy Director Rachel Bovard made the same switch in an opinion piece for USA Today:

Internet platforms would receive a liability shield so they could voluntarily screen out harmful content accessible to children, and in return they would provide a forum for “true diversity of political discourse” and “myriad avenues for intellectual activity.”

By narrowing “the internet” to particular “internet platforms,” Bovard and Parshall invent a Section 230 that demands diversity within platforms, rather than between them. The expectation that all platforms offer truly diverse forums amounts to an expectation of uniformity in platform policy. If all platforms must serve as a “forum for a true diversity of political discourse,” none may serve particular communities. This leveling would perversely render the internet as a whole far less diverse than it is today. Instead of Ravelry offering a platform for knitters and other services offering a home for unfiltered MAGA fandom, Bovard and Parshall would have every platform host it all. Taking a narrow view of the internet as a handful of major platforms, they propose systemic changes that would put the diversity they ignore on the chopping block.

Their unworkable expectation is at odds with a plain reading of the statute and the intentions of Section 230’s drafters, then-Representatives Ron Wyden (D-OR) and Chris Cox (R-CA). In a recent letter to the Federal Communications Commission objecting to its efforts to modify the statute via rulemaking, they write:

In our view as the law’s authors, this requires that government allow a thousand flowers to bloom—not that a single website has to represent every conceivable point of view. The reason that Section 230 does not require political neutrality, and was never intended to do so, is that it would enforce homogeneity: every website would have the same “neutral” point of view. This is the opposite of true diversity.

By allowing individual websites to screen off-topic or “otherwise objectionable” speech, Section 230 ensures that online communities and service providers can choose whatever rules or standards they think most fitting for their particular corner of the internet.

All platforms have rules intended to foster particular sorts and styles of conversation. Some are enforced by moderators or bots, while others are built into the platform’s architecture. Twitter maintains rules against threats of violence, and the platform will not allow an account to post more than 100 tweets in an hour. Even ostensibly ungoverned platforms maintain rules. 4chan is divided into topic-specific image boards for everything from “Papercraft & Origami” to “Adult Cartoons.”

Because online real estate is an unlimited resource, for those who find a given ruleset ill-fitting, exit is cheap. Section 230’s intermediary liability protections keep the cost of exit low by preventing platforms from being held liable for their users’ speech. While The Atlantic staff writer Kaitlyn Tiffany calls this capacity for exit “the internet’s structural penchant for hate,” it prevents any single set of platform rules from creating a universal prohibition. Unlike legal speech restrictions, unwanted platform restrictions are intended to be avoided through the creation of competing jurisdictions.

This is particularly important for explicitly dissident alternatives to mainstream platforms. Forums such as Ovarit were created as off-platform alternatives to banned subreddits. For these burgeoning, essentially moderator-run forums, the fact that they regularly host speech deemed impermissible by Reddit would serve as a magnet for litigation in the absence of Section 230.

Indeed, at a time when traditional media gatekeepers have deemed migration to Parler “a threat to democracy,” and treat podcast apps as the next front in an unending War on Disinformation, intermediary liability protections are vital speech protections. Advocates of liberal speech governance should refrain from reading expectations of uniformity into Section 230. Undermining protections for diverse approaches to content moderation will serve only to nip alternatives to mainstream platforms in the bud.

Commentary by Will Duffield. Originally published at Cato At Liberty. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
