Online titans such as Facebook, Twitter, and YouTube might seem invincible. But the Supreme Court will soon take up a pair of lawsuits that could severely undermine all three companies, while radically changing the way millions of people use social media.
Both cases revolve around two people killed in separate terrorist attacks. Relatives of the victims say social media companies tolerated hateful messages posted by the terrorist group ISIS, and in some cases used software algorithms that targeted users with a steady stream of radical messages goading them toward violence.
The high court agreed this week to hear both cases, setting up a dramatic challenge to the most important federal law governing the Internet — Section 230 of the 1996 Communications Decency Act. A victory for the plaintiffs could mean social media companies face financial liability for extreme, offensive, or libelous material posted by users. That, in turn, could chill free speech if social networks impose stricter limits on what users can publish online. It could also cost the companies billions in advertising revenue by hampering their ability to serve personalized ads.
Section 230 was devised after the New York State Supreme Court ruled in 1995 that the now-defunct online service Prodigy could be sued if one of its users posted defamatory statements. The ruling was a blow to online free speech. If any website operator could be sued for an insult posted by a visitor in the comments section, today’s social media would never have been born.
Congress, not known for its technical savvy, devised a clever solution. Section 230 allows online providers to exercise control over what they publish, while shielding publishers from legal liability for messages posted by their users. So, a service such as Facebook can weed out pornography and racist rants. Millions of mainstream users feel safe there, and so do advertisers who pay billions to Facebook to promote their products. And if a nasty posting does slip through, Facebook can’t be sued for the user’s bad judgment.
This clever balancing act has enabled US social media companies to dominate the world. But lately, Section 230 has acquired a horde of enemies, from both the left and the right. Progressives complain it lets companies get away with harboring misinformation and hateful speech. Conservatives say it gives social media companies too much power to censor right-of-center viewpoints. Republican politicians in Texas and Florida have even passed laws that would forbid social media firms from deleting controversial posts. These laws run head-on into Section 230 and are bound to end up at the Supreme Court someday.
But first comes a case that takes up a very different question. In Gonzalez v. Google, the family of a woman killed in a 2015 ISIS terrorist attack in Paris says that Google’s YouTube video service bears responsibility, not simply because the attackers watched YouTube videos, but because Google actively encouraged the killers to watch.
YouTube, like other social sites, uses algorithms that identify what viewers like, then shows them more and more videos of the same kind. The aim is to get viewers to spend more time online, and view more of those profitable ads.
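The real ranking systems behind YouTube and its peers are proprietary and far more elaborate, but the basic feedback loop described above — watch something, get shown more of the same — can be sketched in a few lines. The function and data shapes below are illustrative assumptions, not any platform's actual API: each video carries topic tags, and candidates are scored by how often their tags appear in the user's watch history.

```python
from collections import Counter

def recommend(watch_history, catalog, k=3):
    """Toy stand-in for engagement-driven ranking: surface more of
    whatever the user has already watched."""
    # Count how often each tag appears in the user's watch history.
    tag_weights = Counter(tag for video in watch_history for tag in video["tags"])

    # Score a candidate by the summed weight of its tags -- videos that
    # resemble past viewing score highest, reinforcing the feedback loop.
    def score(video):
        return sum(tag_weights[tag] for tag in video["tags"])

    # Recommend the top-k videos the user hasn't seen yet.
    watched = {v["id"] for v in watch_history}
    candidates = [v for v in catalog if v["id"] not in watched]
    return sorted(candidates, key=score, reverse=True)[:k]

history = [{"id": 1, "tags": ["cats", "funny"]},
           {"id": 2, "tags": ["cats"]}]
catalog = [{"id": 3, "tags": ["cats", "music"]},
           {"id": 4, "tags": ["news"]}]
print(recommend(history, catalog, k=1))  # the cat video outranks the news clip
```

Even in this simplified form, the dynamic at issue in Gonzalez is visible: the system never judges content, it only amplifies whatever the viewer already engages with.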
The plaintiff alleges that this algorithmic promotion of violent videos makes YouTube responsible for inciting criminal behavior. Indeed, the family says YouTube profited from showing terrorist videos by running paid ads alongside them, a process called “monetization.” When YouTube monetizes someone’s videos, it shares the revenue with the video creator. In effect, Gonzalez says, YouTube provided financial support to ISIS.
A lower court rejected those arguments, but the Supreme Court will hear Gonzalez’s appeal, along with a similar case, Twitter et al. v. Taamneh. In this case, the relatives of a Jordanian citizen killed in a 2017 ISIS attack in Istanbul say that Twitter, Facebook, and YouTube are liable for the attack because they were aware that terrorists used their services but didn’t do everything in their power to weed them out.
The Taamneh case differs from Gonzalez in one key respect: it doesn't single out the use of algorithms to promote terrorist content.
Attacks on algorithmic promotion are nothing new. Facebook whistle-blower Frances Haugen called for an end to the practice in testimony before Congress last year. Democrats in Congress have proposed a law that would strip Section 230 protection from any online postings that are promoted via algorithms. That way, if Facebook’s algorithm promoted false claims that a voting machine company rigged the 2020 elections, the voting machine company could sue Facebook for sharing the message.
But if the Supreme Court sides with Gonzalez, then suddenly, any algorithmically shared post on Facebook, Twitter, YouTube, or TikTok is a lawsuit waiting to happen.
Eric Goldman, professor of law at Santa Clara University, thinks an algorithmic crackdown is an awful idea. Goldman argues that these decisions, while automated, are no different from a newspaper editor’s decision to play up one story and bury another on page 12.
“The fact that it’s done by machines or is highlighting some content over others, none of that should really matter,” he said.
Will Duffield, policy analyst at the libertarian Cato Institute in Washington, said social media companies might try limiting algorithmic promotion to uncontroversial topics like knitting or stamp collecting. Meanwhile, there’d be no more recommendations for posts involving stuff like politics, religion, or sex.
But imagine you’re a Muslim and Facebook stops sending you posts related to your faith. Now do the same for people interested in abortion, transgender rights, or police reform. Next thing you know, warned Duffield, users will sue, alleging discrimination against Muslims or Black Lives Matter activists or opponents of abortion. Completely abandoning algorithmic promotion might be the only fair solution.
What if YouTube could no longer direct you to the next funny video, or Twitter didn’t send you outrageous tweets from politicians or film stars? You’d spend a lot less time at YouTube or Twitter, which means a lot less time viewing those lucrative ads. Multiply by millions of users, and watch as social media ad revenues plummet.
Facebook’s audience declined last year, and growth remains stagnant, as young users migrate to TikTok. Moreover, the company predicts ad revenue will fall $10 billion this year, largely due to new pro-privacy features of Apple’s iPhones that make it harder to track user behavior for ad customization. For Mark Zuckerberg’s company and others, a ruling for Gonzalez could pose an existential threat.
Could it come to that? Conservative Supreme Court Justices Clarence Thomas, Samuel Alito, and Neil Gorsuch have all suggested they think Section 230 has been interpreted too broadly and gives social media companies too much power to restrict speech. With just two more votes, which could even come from the court’s liberal wing, the social media companies will have a big problem.