Online hate is spreading, and Internet platforms can’t stop it

Suspects Cesar Sayoc (left) and Robert Bowers. (Left: Broward County Sheriff’s Office via The New York Times; right: AFP/Getty Images)

The suspected shooter who killed 11 people at a Pittsburgh synagogue left a trail of virulent anti-Semitism on a social network frequented by right-wing extremists. The man who allegedly mailed bombs to Barack Obama and other prominent figures had been reported for making threats on Twitter.

When online hate spills over into real-life violence, it highlights what is arguably the greatest technological challenge facing the Internet: Media companies have not figured out what to do about the threats and abuse that pollute their platforms.

Despite the promise of artificial intelligence, algorithms have so far proved no match for the nuance of human language; monitoring posts requires not just finding specific words, but understanding meaning, intent, and context.
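
To see why simple word filtering falls short, consider the toy Python sketch below. The blocklist, function, and example posts are hypothetical, not any platform’s actual system: the same flagged word can appear in an attack and in a post condemning that attack, while a genuine threat can avoid the list entirely.

    # A toy keyword filter -- hypothetical blocklist, not any platform's real system.
    BLOCKLIST = {"vermin", "subhuman"}

    def keyword_flag(post: str) -> bool:
        """Flag a post if it contains any blocklisted word, ignoring case and punctuation."""
        words = {w.strip(".,!?\"'").lower() for w in post.split()}
        return bool(words & BLOCKLIST)

    # An attack on a group is caught ...
    print(keyword_flag("They are vermin and should be driven out."))          # True
    # ... but so is a post condemning the same language (a false positive) ...
    print(keyword_flag("Calling people vermin is dehumanizing. Report it."))  # True
    # ... while a threat that avoids the listed words slips through entirely.
    print(keyword_flag("You know what needs to happen to those people."))     # False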

The Anti-Defamation League, which fights anti-Semitism and other forms of bigotry, is so concerned about online hate that it has opened a Silicon Valley office to work with tech companies on the problem.

The major challenge facing developers: how to establish a bright line between what is acceptable and what is not.

“You need to have a clear definition of hate speech. It’s really hard to define, and it’s really hard to have the data that can build the models that can be trained to detect this kind of thing,” said Daniel Kelley, associate director of the Center for Technology and Society at the ADL.

Allowing hate speech on popular platforms can traumatize regular users and risks normalizing points of view that would otherwise have been confined to the dark margins of society. Attuned to these concerns, social media companies are staffing up on human content moderators; Facebook alone has hired thousands in the past year.

“They wouldn’t do that if the technology was there, because they’re tech companies,” Kelley said.

The ADL said it found 2.6 million tweets “containing language frequently found in anti-Semitic speech” in the run-up to the 2016 election, and it said recent analysis shows a “marked rise in the number of online attacks” in advance of next week’s midterms.

Even elite programmers have struggled to build tools sophisticated enough to root out hate speech. In a study released this fall, Finnish and Italian researchers said they were able to thwart seven hate-speech detection algorithms with simple tricks such as adding typos, running words together, and appending the word “love” to otherwise toxic messages.
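
As a rough illustration of the evasion tricks the study describes, the toy Python functions below (illustrative only, not the researchers’ actual code) show how each one transforms a message before it reaches a classifier.

    import random

    # Toy versions of the evasion tricks described in the study -- illustrative only.

    def add_typo(text: str, seed: int = 0) -> str:
        """Swap two adjacent characters so keyword and token matching miss the word."""
        rng = random.Random(seed)
        i = rng.randrange(len(text) - 1)
        return text[:i] + text[i + 1] + text[i] + text[i + 2:]

    def run_words_together(text: str) -> str:
        """Delete the spaces so a token-based model sees one unknown word."""
        return text.replace(" ", "")

    def append_love(text: str) -> str:
        """Pad the message with a benign word to dilute its toxicity score."""
        return text + " love love love"

    msg = "you people are the problem"
    for attack in (add_typo, run_words_together, append_love):
        print(attack(msg))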

Shiri Dori-Hacohen, chief executive of the Western Massachusetts company AuCoDe, which is working on ways to detect online controversy using artificial intelligence, said the problem highlights the limitations of today’s technology.

“Language is complicated, and computers are a far cry from being able to figure it out — even with the help of some of our smartest humans,” she said.

Dori-Hacohen doubts that even an effective technology would make a difference in the current climate — there is just too much hate out there, she says. She also worries that platforms with strict rules will simply drive hateful users to other services.

“Even if you had humans policing it — which you couldn’t possibly do at the scale that it’s happening today — so long as you have online enclaves that allow that permissive behavior, then AI wouldn’t even solve the problem,” Dori-Hacohen said.

And when radical attitudes fester in more obscure reaches of the Internet, it makes it harder for society to track the threat or reckon with it.

Robert Bowers, the alleged shooter at the Tree of Life Synagogue in Pittsburgh, for example, was a user of Gab, a service to which many extremists had flocked amid complaints that more mainstream services were censoring their views.

The site, which has defended itself as a forum for unbridled speech, is down, at least for the time being, because several of its service providers refused to work with Gab after the violence in Pittsburgh. Gab’s users are again scattering to more-obscure online platforms.

Using computers to moderate posts on the Web also risks undermining important political discourse about racism and xenophobia.

It could inadvertently remove content that references hate speech, such as a black user’s comment describing the experience of encountering a racial slur. It may also deprive well-meaning users of the opportunity to publicly confront ideas that they find offensive.

“The problem with broad and vague definitions is that it’s subject to including things like political speech and dissent, because one person’s view of what’s demeaning to a group could be another person’s view of political speech,” said Danielle Citron, a law professor at the University of Maryland and author of the book “Hate Crimes in Cyberspace.”

Citron, who said she herself has been a target of online anti-Semitism, has often argued that platforms should be cautious about banning categories of speech.

But in the wake of the Pittsburgh shooting, she said she is grappling with the question of when hateful language becomes incitement.

“We need to be really vigilant about speech that gets really close to reducing people to non-humans and calling for their destruction. Even in vague ways, I think that is harmful and dangerous and troubling. It doesn’t mean it has to be removed. We just have to follow it,” she said.

The American Civil Liberties Union, concerned about the power that the biggest social networks wield, has said it is wary of calls for Facebook to censor speech. The organization believes that the use of artificial intelligence is “only likely to exacerbate the problem” by removing content that should remain in the public sphere.

For law enforcement, using technology to monitor social media for potential threats can be as fraught as it is for the platforms themselves. A 2017 effort by Boston police to spend up to $1.4 million on software to scan social media posts for public safety threats was scrapped amid complaints from civil liberties groups that the program amounted to unnecessarily broad surveillance.

Edward F. Davis, the former Boston police commissioner who left the department in 2013, said authorities have an array of technological tools for monitoring social media posts, but the public needs to arrive at a consensus about how they should be used.

When police find threatening posts, he said, they at least have the option to open an investigation into the writers to determine “whether they have the means to commit the crime, whether it’s just rhetoric, or whether they’re actually going to do something.”

There are some hopeful signs for the future of online discourse. Twitter says it has found some success in using a combination of machine learning and human reviewers to find problematic “trolls” and diminish the prominence of their tweets.

In searching for those who disrupt healthy conversation, Twitter looks for clues not in what users say, but how they behave. For instance, people who sign up for multiple accounts at the same time might be less likely to have their tweets suggested to other users in searches.

The company said in May that such moves had reduced reports of abuse.
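
As a rough sketch of what behavior-based ranking can look like, the snippet below scores an account on how it acts rather than on what it posts. The signals and weights are assumptions for illustration, not Twitter’s actual formula.

    from dataclasses import dataclass

    # Hypothetical behavior-based visibility score -- the signals and weights are
    # illustrative assumptions, not Twitter's actual system.

    @dataclass
    class AccountSignals:
        simultaneous_signups: int    # accounts created in the same sign-up session
        blocked_by_count: int        # how many users have blocked the account
        uninvited_reply_rate: float  # share of replies sent to non-followers, 0.0-1.0

    def visibility_score(s: AccountSignals) -> float:
        """Lower scores mean the account's tweets surface less often in search."""
        score = 1.0
        score -= 0.2 * max(0, s.simultaneous_signups - 1)
        score -= 0.01 * s.blocked_by_count
        score -= 0.3 * s.uninvited_reply_rate
        return max(0.0, score)

    print(visibility_score(AccountSignals(1, 2, 0.1)))   # typical account: ~0.95
    print(visibility_score(AccountSignals(4, 40, 0.9)))  # burst sign-ups, widely blocked: 0.0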

Slater Victoroff, chief technology officer of Indico, a Boston startup that uses artificial intelligence to make sense of large sets of unstructured data, said today’s efforts could lead to more sophisticated monitoring systems that deploy both human and artificial intelligence.

“We need to have some kind of objective metrics and come to a socially accepted definition of this before AI is a valid solution,” he said. “It’s a really hard discussion to have, but it’s one that we need to have.”


Andy Rosen can be reached at andrew.rosen@globe.com.