This editorial has been updated to reflect breaking news.
Why was Donald Trump allowed to organize and incite a violent invasion of the nation’s Capitol on Wednesday, in part via missives on Twitter, YouTube, and Facebook? The simple answer, which is that social media companies failed to delete related posts or suspend his account until the attack had already happened, may seem satisfying. But it’s neither the full story nor anything close to a real solution.
Because for months, these social media platforms spread and let fester conspiracy theories about election fraud via posts that reached millions of people. For years, they allowed white supremacist and right-wing extremist vitriol to propagate. And while today Trump is a major superspreader of the pandemic of disinformation, hate speech, and organizing of violence that plagues democracy itself, he is not even close to the only one. By cutting off Trump — in Twitter’s case, for 12 hours initially and then on Friday via a permanent ban, and in Facebook’s case until at least the end of his term — the social media CEOs whom we have entrusted with governing the new digital public squares took only one source of contagion out of a world where a deadly epidemic continues to rage.
Facebook, Google (which owns YouTube), and Twitter are among the most cash- and talent-rich companies on the planet. They have all the tools at their disposal to isolate and quarantine racist and violence-inflaming content. They know how to put the brakes on vitriolic content before it goes viral; they know how to take down deliberately false, dangerous posts made by highly influential people. They know how to fact-check or outright ban lying advertisements. They know how to permanently suspend accounts of people who abuse the system. We know this because they have done it on occasion — in highly limited circumstances and for small spans of time.
And yet these companies have abetted the spread of fascist right-wing propaganda, lies about mail-in voting and election fraud, misinformation that leads people to endanger themselves in the pandemic, and violent white supremacist organizing. Isolated efforts — such as Twitter’s banning of Steve Bannon for suggesting Anthony Fauci should be beheaded, Facebook’s shutting down the account of the group that organized an attempted kidnapping of Governor Gretchen Whitmer of Michigan, or YouTube’s policy as of December to no longer carry content asserting 2020 election fraud — have proved to be either too little, too late, or limited to superficially dealing with an immediate bad situation instead of addressing the problems being bred in the broader environment. It’s akin to breaking up a house party as a substitute for an actual plan to test, socially distance, and distribute vaccines in a pandemic.
The longtime defense has been that the platforms are marketplaces of ideas, that they don’t want to curb free expression. But the underlying reason has become obvious: Incendiary content yields more followers for people who post on the platforms, and algorithms draw audiences to such content in an unvirtuous cycle, while advertising revenue grows for content and users that generate more engagement.
The adage Justice Louis Brandeis offered in a 1927 US Supreme Court decision, that the answer to vile or fallacious speech is more speech, did not anticipate the digital age. On social media platforms, where rage and algorithms drive many of the most outlandish lies, attacks, and conspiracy theories to rapidly go viral, lies travel all the way around the world within minutes, and truth often fails to follow at all. Such online misinformation can hardly be treated as harmless anymore; it is the proven tool of autocrats who cling to power and attempt coups. Taking down such posts retroactively, as has been Facebook and YouTube’s dominant approach, or flagging them as disputed, as Twitter does, doesn’t undo the damage done in the original conflagration. And shutting down Donald Trump now, although necessary, rings like little more than corporate lobbying and PR strategy in the twilight of an administration. How convenient to find one’s backbone when a new government less sympathetic to the right-wing mob is coming to town — and when public sentiment has made sympathy for the devil less appealing.
Mark Zuckerberg, according to a spokesperson, is “appalled” by the events that transpired Wednesday. But the fate of democracy and the safety of the public and public officials should not hinge on one Harvard dropout’s feelings, even if he had the compassion and judgment of Mother Teresa. Consequential decisions about what kinds of posts get flagged or removed and which people get their accounts suspended on a platform that reaches billions of people should not be subject to the politics or sentiments of one guy who happens to be the largest shareholder.
Zuckerberg “deservedly has gotten a lot of critique for his failure to address this earlier and more quickly,” said Eli Pariser, author of “The Filter Bubble” and codirector of the Civic Signals project at the National Conference on Citizenship, in an interview. “But I don’t think any one person would be positioned to do this right,” Pariser told the Globe editorial board, regarding the disproportionate power Zuckerberg has come to wield.
Shaking our fists at Zuckerberg and Twitter CEO Jack Dorsey also won’t fix it; what’s broken here is more fundamental and the responsibility of political and civic leaders. Social media company ownership structures and business motives, paired with an unchecked oligopoly, are inherently at odds with the public interest. The companies must face more competition, not just from for-profit rivals like Parler and Reddit that share similar advertising imperatives, but also from social networks with a mission to curate better content and the funding to create dynamic public squares, with reliable information and less vitriol — alternative venues for virtual social connection.
For people to benefit from the upsides of social media while reducing the threat to public safety, two things must happen next: First, the government must break up the companies and regulate them to allow for competitive mission-driven platforms to emerge whose motives can be aligned with serving the public good. And second, a new kind of social media platform must be funded by public and philanthropic sources, in a financing model akin to that of public broadcasting. Making sure that people can easily transfer data and contact groups between social media networks, possibly via regulation, is one key to making this viable.
What recent events underscore is that existing social media companies cannot and should not be trusted to be brokers of public discourse and safety in a democracy. They have proved ill-matched to the task of cultivating a robust online public square for civic dialogue centered on truth, and instead have created playgrounds for extremists, conspiracists, and propagandists.
If new social media networks free of such ills sound lofty, it might be because it’s hard to imagine what has never existed. But that, warns Ethan Zuckerman of UMass Amherst (former director of MIT’s Center for Civic Media), means “[W]e are ceding the future of the internet to the companies that have already taken power.” One lesson of the Capitol siege is that there is a grave danger not just in ceding that future, but in accepting business as usual in the present.
Editorials represent the views of the Boston Globe Editorial Board. Follow us on Twitter at @GlobeOpinion.