Opinion | Niall Ferguson

How YouTube became YouGag


I am old enough to remember when Twitter billed itself as “the free speech wing of the free speech party.” Well, “it’s the morning after the free speech party, and the place is trashed.” Don’t take it from me. The words I’ve just quoted come from Adam, a twenty-something content moderator in one of the “trust and safety teams” now employed to detect and remove “hate speech” by Facebook, Google, and the other network platforms.

Last week, YouTube announced that it was “specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation, or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation, or veteran status.” It swiftly became clear what that meant in practice. On Wednesday, as The New York Times reported, “numerous far-right creators began complaining that their videos had been deleted or had been stripped of ads.”


As if to test YouTube out, a Vox journalist named Carlos Maza demanded that it ban Steven Crowder, the host of a raucous political show, on the ground that Crowder had repeatedly made homophobic jokes about him. At first YouTube resisted but soon — partly under pressure from other Google employees — it essentially folded, announcing it had “suspended this channel’s monetization . . . because a pattern of egregious actions has harmed the broader community.”

I knew nothing of either Maza or Crowder until this week. The point of this column is not to defend the latter or the obnoxious “Socialism Is For Fags” T-shirt he sometimes wears on his show. The point I want to make is the more general one, that free speech on the Internet is in free fall.

Crowder has company. Last month Facebook banned not only the conspiracy theorist Alex Jones but also the alt-right provocateur Milo Yiannopoulos, the white supremacist Paul Nehlen, the African-American Muslim zealot Louis Farrakhan, and the nationalist activist Laura Loomer. And those are just the better-known names.


Having previously confined themselves to removing pedophiliac and terrorist content, the big tech companies are now openly engaged in political censorship. Google admits as much: A March 2018 internal presentation was actually entitled “The Good Censor.” What this means in practice is that tens of thousands of content moderators like young Adam are deciding what you can and cannot see online.

In “1984,” Orwell’s vision of the future was “a boot stamping on a human face — forever.” In 2019, it turns out to be a geek hitting “delete” on a keyboard — forever. The point is not who gets censored or demonetized. The point is that companies as big and ubiquitous as Google and Facebook should not have this kind of power.

When so many people now read an article like this after being directed to it by one or other of the tech platforms, it is correct to say that the platforms are — in the words of recently retired Supreme Court Justice Anthony Kennedy — “the modern public square.” Yet they are emphatically not acting in that spirit, unless it was Tiananmen Square he had in mind.

Remember, the First Amendment to the Constitution bars Congress from “abridging the freedom of speech, or of the press,” and the Supreme Court has allowed few exceptions. Much more than in Europe, American courts are reluctant to penalize speech, even when plaintiffs allege defamation, invasion of privacy, or emotional distress.


But none of this applies online, where, in the words of two legal scholars, the big tech companies can “act as legislature, executive, judiciary, and press.” For they are doubly protected. First, the First Amendment is generally held not to apply to private companies. Second, Section 230 of the 1996 Communications Decency Act explicitly states that “interactive computer services” are not publishers (so, unlike newspapers, they can’t be held responsible for bad stuff that appears on their platforms), but they also cannot be “held liable on account of . . . any action voluntarily taken in good faith to restrict access to or availability of material that [they] consider to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” So they can’t be accused of restricting free speech when they delete bad stuff.

To call Section 230 — enacted when the Internet was in its infancy — an anachronism would be an understatement. It would be more accurate to say that it is the Catch-22 of our time, in that the big tech companies are not publishers when harm arises from the content on their platforms, but they are publishers when they engage in censorship. Either way, they have minimal legal liability.

Yet Section 230(a)(3) explicitly assumed that online platforms would “offer a forum for a true diversity of political discourse.” And the phrase “or otherwise objectionable” was never intended to cover political positions.


These days in Washington, there is a great deal of discussion of “breaking up big tech” by resuscitating or reforming antitrust law. Other voices (including, suspiciously, the big tech companies) clamor for more regulation.

But the free speech crisis can and should be addressed more simply. The network platforms handle far too much content to be effective publishers. They are entitled to Section 230’s protection — but only if they uphold the diversity of discourse envisaged by Congress.

The alternative is to repeal Section 230 and impose on big tech something like a First Amendment obligation not to limit free speech. Speaking as one of the last surviving members of the free speech party, I’d prefer that second option. But either would be an improvement on that geek hitting “delete” on a keyboard forever.

Niall Ferguson is the Milbank Family Senior Fellow at the Hoover Institution, Stanford University.