If there’s one thing tech companies, provocateurs on the far right, and civil libertarians across the political spectrum all tend to say, it’s that they stand up for “freedom of speech.”
Often this claim is offered up, ironically enough, as a conversation stopper: How could you be against freedom of speech? But it’s worth asking what these groups are actually after, because freedom of speech is not an end in itself.
Freedom of speech is a cherished value primarily because it promotes democracy: Because governmental power is held by the people, the people must be able to freely exchange ideas without restraint and without fear of reprisal. Yet many of the same people — including the current president — who say their freedom of expression is inhibited by “censorship” attack or undermine the foundations of democracy.
And while Twitter and Facebook finally took the welcome — if insufficient — steps of cutting off Donald Trump and his associates, who have been using the platforms to promote the violent overthrow of our freely elected government, this comes only after these and other tech companies have been implicated in the promotion of antidemocratic politics around the world.
There is no doubt that much of what takes place on Twitter and Facebook is real, unfettered exchange about fundamental political issues. Such discussions are often far from “civil,” and they don’t need to be. But much of what is spread by social media — from disinformation to intimidation — strikes at the heart of democratic ideals.
How can anyone argue that democracy’s own core principles require us to let its enemies tear it apart for as long as they want?
The problem stems from the fact that in the United States, and to a lesser extent around the world, we have come to develop an absolutist perspective on free speech. The First Amendment begins “Congress shall make no law,” and that’s often held to mean that government may not touch anything that even looks like speech. But that claim is untrue: Even in the United States, law touches speech in hundreds of ways. For example, speech used in furtherance of a criminal enterprise such as murder or fraud counts as primary evidence of the crime. There are penalties for libelous and slanderous speech. There are full prohibitions on noxious material like depictions of the exploitation of children. Yet technology companies, far-right agitators, and other groups continually present the issue as black and white: They claim that either we protect speech absolutely (despite the fact that we don’t do this) or we don’t protect it at all.
As a small group of scholars and activists are arguing with increasing force, this is a false choice, and it is manifestly possible to protect free speech — and thus enhance the political and democratic values free speech is meant to promote — while suppressing, or at least not actively encouraging, the efforts of those who want to turn democracies against themselves.
And if we grasp that protections on speech really exist to enhance democratic participation, then it’s easier to see through the claims that digital products such as Bitcoin or Apple’s computer code count as speech. In other words, we’d see that a lot of cries for “freedom of speech” in the Internet era are really just demands for freedom from regulations that wouldn’t be challenged in the offline world.
Computer code on a pedestal
The problem of freedom of speech being used to undermine the democracy it is meant to promote has deep historical roots, but two unfortunate trends have made it especially acute. One trend is that the far right has, at least as far back as the rise of fascism in Italy and Germany, sold the view that “the speech we hate” is somehow the most valuable speech in democracies — and legal scholars and organizations like the ACLU have helped to advance that claim. One of the most famous First Amendment cases in the 20th century was the ACLU’s defense of a proposed march by Nazis on Skokie, Ill., in the 1970s. When Americans learn about this event, they learn that the ACLU was protecting a fundamental democratic value by defending Nazis. Yet how can it be that democracy depends on tolerance for speech that is designed to generate hatred not just of minorities but of democracy itself?
The idea that Nazi speech must be tolerated to have a functioning democracy is provably false. Nazi speech has been outlawed in Germany since World War II, and yet Germany continues to score very high, sometimes higher than the United States, in assessments of the world’s democracies. For example, in the Democracy Index published by the Economist Intelligence Unit, which weighs such factors as civil liberties and the health of political culture, Germany rates as a “full democracy” while the United States is a “flawed democracy.” Are we defending democracy by protecting the speech of Nazis — or are we, as legal scholars Richard Delgado and Jean Stefancic put it, “simply defending Nazis”?
The second unfortunate trend has to do with the blurring of lines between speech and actions taken by corporations. In its infamous 2010 Citizens United decision, the US Supreme Court appeared to assert that spending money on political ads is the same thing as speaking. As in the Nazis-in-Skokie case, the ACLU sided with the party — here, corporate interests — that seemed on its face to be antidemocratic.
But the issue runs even deeper than this case, because wave after wave of technological change has complicated the speech/action distinction. For example, in the last decade or so a doctrine has arisen called “code is speech.” It holds that because computer programs are made of code that looks something like human language, everything done with computer code deserves First Amendment protections, and never mind the fact that the whole point of computer programs is to do things — to take action. The Electronic Frontier Foundation and other digital advocates routinely suggest that “code is speech” is an obvious and well-established legal principle. Apple made this very claim in court filings in 2016, when it said it had a First Amendment right not to provide the FBI with a way of unlocking, under legal warrant, the iPhone of a suspect in the San Bernardino terror attack.
So far, many judges have rejected the “code is speech” doctrine on its face, precisely because computer programs, when they are run, perform actions. And yet, as absurd as the “code is speech” argument is, it is nevertheless a rock-bottom foundation for much commentary about and on social media — commentary that more often than not conflates what most of us understand as “speech” with things as varied as the operation of Google’s search engine, the deployment of facial recognition algorithms, the targeting of protesters with artificial intelligence, and the operation of drones.
It’s an attempt to accord actions the protections granted to speech — in fact, more protections than speech itself actually has. After all, the First Amendment allows the government to write laws affecting speech in a variety of ways, depending on the kind of speech and regulation in question. One very rarely hears the complaint that the broadcasting standards issued by the Federal Communications Commission “violate free speech,” despite the fact that large categories of content that most of us would think of as speech in some sense — think especially of otherwise legal, adult pornographic material — are barred from appearing on the public airwaves, even when those public airwaves are licensed to private corporations. Issuing orders to commit crimes, or “falsely shouting fire in a theatre and causing a panic,” as Supreme Court Justice Oliver Wendell Holmes Jr. wrote in a 1919 decision, does not receive First Amendment protection. The claim that speech has absolute protection from law is a species of the assault on governmental regulation that has characterized right-wing political activism for decades.
Into a black hole
In addition to the “code is speech” doctrine, the absolutist approach to speech has made it hard to regulate digital technology under Section 230 of the Communications Decency Act of 1996. Section 230 has recently become a target of both progressives and conservatives, in no small part owing to ambiguity about its meaning and effects. Some of that agitation, especially from President Trump, has obscured the work of progressive activists, lawyers, and legal scholars who have been working for years to push back against the shield of legal immunity the law appears to give to digital platforms like Facebook and Twitter.
The title of lawyer, journalist, and cybersecurity professor Jeff Kosseff’s excellent 2019 book, “The Twenty-Six Words That Created the Internet,” repeats a claim we often hear from the law’s supporters: that social media companies could not exist without it. Those 26 words read: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The law was intended to have two related effects, which in some ways are at cross purposes. One was to encourage platforms to moderate problematic content: Congress “hoped to encourage the companies to feel free to adopt basic conduct codes and delete material that the companies believe is inappropriate,” Kosseff observes. But it was also intended, Kosseff says, to “allow technology companies to freely innovate and create open platforms for user content. Shielding Internet companies from regulation and lawsuits would encourage investment and growth, they thought.”
One of the most fascinating aspects of Section 230 is the lack of agreement, among even the well informed, about what it means. Appearing to endorse the claim that the law is necessary, Kosseff writes that “YouTube, Facebook, Reddit, Wikipedia, Twitter, and eBay ... simply could not exist without Section 230.” Yet in the same paragraph Kosseff rightly notes that those companies operate in many countries that do not have Section 230 protection or anything close to it, and do not come crashing to the ground. In none of them does the Internet “break.” Even if Section 230 somehow “created” the Internet, the Internet nevertheless persists quite robustly where the law does not exist.
Section 230 has become a glaring example of the negative consequences of absolutist views of free speech. Internet companies and their promoters and lobbyists have encouraged courts and companies to believe that they have — and need to have — legal impunity for the content on their sites. Because of this misunderstanding, any editorial intervention or moderation on their part is cast as “censorship,” despite the fact that, as far as the First Amendment goes, it is only government curtailing of speech that qualifies as “censorship.” As soon as one starts to consider the actions of private companies to be censorship, the most ordinary activities associated with publishing — such as editing — can be disingenuously described as censorship.
Section 230 has been used in courts to shield companies from what seem like entirely reasonable legal consequences. One of the most egregious instances is a lawsuit known as Herrick v. Grindr, in which the dating app Grindr repeatedly invoked Section 230 to shield itself from liability for providing a tool that enabled the outrageous harassment of one user, Matthew Herrick. Herrick had met a partner over Grindr. After they broke up, Herrick’s ex set up a fake profile for Herrick on Grindr and another app and sent a stream of men seeking hookups to Herrick’s home, telling them that he wanted rough sex and that if he appeared to refuse, this was part of the game and they should persist — in other words, directly provoking people to rape and assault him.
Herrick was resourceful enough to stave off physical harm. He called the police more than a dozen times. He also contacted Grindr and the other dating app company, demanding the fake profiles be removed. The smaller app company immediately did so. But despite the fact that his ex’s behavior directly violated Grindr’s terms of service, Grindr repeatedly refused to help. Herrick’s lawyer, Carrie Goldberg, has fought a years-long uphill battle against the company, which has hidden behind the near-total immunity provided by Section 230 — even though prior legal theories around product liability would seem to apply in the case.
One of the most trenchant critics of the way digital technologies are distorting free speech is University of Miami law professor Mary Anne Franks. In her 2019 book “The Cult of the Constitution: Our Deadly Devotion to Guns and Free Speech,” Franks shows how claims of censorship have been “hurled against stalking laws, revenge porn statutes, anti-harassment training, diversity initiatives, blocking users on Twitter, criticism of sexism in video games, pointing out racism, closing comments sections” and much more. Rather than encouraging free speech, she writes, these efforts “have hobbled attempts to build a truly diverse and robust online free speech culture.”
Franks and her Cyber Civil Rights Initiative also have led efforts to ban so-called revenge porn, the disclosure of sexually explicit images without the subject’s consent. This nonconsensual pornography, she writes, “often plays a role in intimate partner violence, with abusers using the threat of disclosure to keep their partners from leaving or reporting their abuse to law enforcement.” Almost every state now has a law criminalizing nonconsensual pornography, but a federal law harmonizing the state standards has remained elusive. The chief opponents have been the ACLU and the self-nominated digital rights advocates at the Electronic Frontier Foundation. As Franks writes, “The ACLU took the position that no criminal law prohibiting the nonconsensual distribution of sexually explicit images was permissible within the bounds of the First Amendment.” The organization also has made the “slippery slope” claim — arguing that laws against revenge porn could be overapplied, although Franks notes that in briefs opposing the law in Illinois, “the ACLU was not able to point to a single actual case of overapplication” of such laws in other states. Even now, when the state laws against nonconsensual porn have resulted in no documented impacts on freedom of speech at all, technology advocates still make the same “slippery slope” arguments to oppose a potential federal law.
In other words, an abstract commitment to free speech absolutism supports a penumbra of legal untouchability around digital technology — outweighing the actual, concrete, verifiable harms that revenge porn does to thousands of real people. This stretch of the First Amendment, Franks argues, is turning it into “a black hole from which nothing — democracy, autonomy, or truth — will be able to escape.”
Corporate power in disguise
We are supposed to think that the “crisis of free speech” in social media is about individuals being “censored.” Never mind that private companies by definition cannot censor. Never mind that the loudest complaints of “censorship” come from either the companies themselves or from white supremacists and other members of the far right, the same people who insist that hoaxsters and provocateurs like Alex Jones and Milo Yiannopoulos and Jack Posobiec and QAnon promoters have something to say that the “mainstream media” is illicitly suppressing. That these are the same political forces that have long made common cause behind the metastasizing First Amendment should come as no surprise. All dispassionate analysis shows that the political right not only is not being suppressed but is actively promoted and helped in numerous ways by social media.
In fact, it’s most accurate to say that technology platforms do not merely permit white supremacist material and other extremist content but actively distribute it. And what can easily be lost in all this is that Twitter, Facebook, Google, and their supporters have not really been advocating for the freedom of individual speech that the doctrine was designed for, to help promote democracy. Rather, it is the antithesis of that: It is corporate power that they have been seeking to uphold — even as the actions of right-wing trolls, actions that look like speech because they include words, drive marginalized people in droves away from these platforms, often much the worse for wear due to threats of every kind of violence, some of which come to fruition.
Last year the invasive facial recognition company Clearview AI asserted a First Amendment right to distribute its surveillance technology and to collect pictures of hundreds of thousands or millions of Americans it scraped from the Web from public and even apparently private forums. The spirit of “code is speech” lurks in that argument. What Clearview AI does has nothing to do with political speech, and yet the company finds it plausible to claim it has the right to violate everyone’s privacy and sell a profoundly invasive product. In a bastardization of freedom of speech, it asserts the right “to ensure a freedom to surveil at will,” as law professors Neil Richards and Woodrow Hartzog put it in the Globe.
This expansion of speech rights into territory that has nothing to do with speech is particularly visible in the rhetoric surrounding Bitcoin, the digital currency birthed by far-right online agitators who call themselves crypto-anarchists. Part of what makes Bitcoin distinct is its use of so-called blockchain technology. Blockchain technology is said to be distributed and decentralized, which in this case means that anyone anywhere can run the software that checks the authenticity of transactions and mines Bitcoin in the process. That means the only way to stop it is to shut down every computer that could run it. That makes it very hard to control, and even legislation making it illegal would be difficult to put into practice.
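The decentralized verification the paragraph describes can be sketched in a few lines. This is a toy hash-chained ledger, not Bitcoin’s actual protocol (it omits mining, signatures, and peer-to-peer networking); the transactions and names are purely illustrative. The point is that any computer anywhere can independently run the same check, which is why no single authority can be shut down to stop it:

```python
import hashlib
import json

def block_hash(block):
    # Hash a block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    # Each block records transactions plus the hash of the previous block.
    return {"transactions": transactions, "prev_hash": prev_hash}

def verify_chain(chain):
    # Any node can run this check on its own: every block must
    # reference the exact hash of the block before it.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# A tiny three-block ledger (illustrative transactions).
genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
b1 = make_block(["bob pays carol 2"], prev_hash=block_hash(genesis))
b2 = make_block(["carol pays dave 1"], prev_hash=block_hash(b1))
chain = [genesis, b1, b2]

print(verify_chain(chain))  # True

# Altering any past entry changes that block's hash, breaking every
# later link — and every independently verifying node notices.
chain[0]["transactions"][0] = "alice pays bob 500"
print(verify_chain(chain))  # False
```

Because the verification logic lives in software that anyone can run, there is no central ledger-keeper to subpoena or switch off, which is the property promoters rebrand as “censorship resistance.”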
It’s true that a software process that is difficult to stop is a new thing in the world. But does that justify the way Bitcoin promoters describe it as “censorship resistance”? In fact, the co-chair of a law firm serving the cryptocurrency industry, building explicitly on the “code is speech” position, has claimed that “Bitcoin is speech.” Yet blockchain technologies are not on their face anything like political speech at all: They simply produce ledger entries, transaction verification, tokens. The idea that laws or regulations that stopped that technology would be “censorship” can gain traction only in a world that has lost track entirely of the nature of political speech and its role in democratic governance. Indeed, it’s hard not to pause over the fact that the crypto-anarchists who call blockchain “censorship resistant” have only contempt and often outright hatred for democracies, so it’s odd for them to be gesturing at a core democratic value as if it should encourage others to support the technology.
Even today, more than five decades after his death, Marshall McLuhan is widely considered the visionary thinker who most clearly foresaw the Internet. McLuhan was an erratic and self-contradicting writer whose ideas like the “global village” and “the medium is the message” often sound far more visionary than careful reflection can support. Much less well known, but arguably far more important, is McLuhan’s teacher Harold Innis, the Canadian economic historian and media theorist whose learning was vaster and whose writing was far more precise than that of his pupil. In a 1951 meeting in Paris, Innis delivered a paper called “The Concept of Monopoly and Civilization.” In that wide-ranging paper, Innis worried that large newspapers determined what people thought about across entire continents, creating what he called “monopolies of knowledge.” The paper was not published until 1995, by which time digital mass instantaneity was finally beginning to show the consequences that his contrarian thinking had predicted: In the name of freedom, a technological framework has been built that the citizens of democracies have very little power over. And the very power to shape that technology has somehow been declared “censorship” by people who mean to deprive democracy of some of its most important features.
It’s welcome that Donald Trump and his QAnon supporters, and even entire products like Parler and 4chan, have been “deplatformed” since Wednesday’s insurrection in Washington, D.C. Twitter, Facebook, and others rightly hold Trump responsible for stoking the violence. But they’re also responsible for it, because they served as tools of antidemocratic propaganda. It is time to ask hard questions about whether these products are in fact compatible with democratic governance. It is not clear that private companies should be in a position to decide whether to ban elected national leaders from their platforms. That suggests not that the bans are wrong but rather that the existence of the platforms, at least in their current forms, is. Reddit, which moved to a heavily moderated model in the wake of earlier scandals, suggests one form social media could take in the future. But there is no reason to think there can’t be other forms of it.
It remains incumbent on all of us to make democratic values central online, and put them ahead of any idea of technological progress or free speech pursued as an absolute and antidemocratic goal.
This means focusing our activism and our legal system on strengthening democracy and its institutions, not handing more and more power to those who pretend to champion democracy while doing everything they can to undermine it. Technology can be useful toward those ends, but only when our uses of it are based on a clear understanding of our core values.
David Golumbia, associate professor of digital studies at Virginia Commonwealth University, is the author of “The Politics of Bitcoin: Software as Right-Wing Extremism.”