The horrifying terrorist attack in New Zealand on Friday, during which a white supremacist gunman mowed down at least 49 worshipers at two mosques while live-streaming the attack on his Facebook page, came three days after the world marked a sadly relevant anniversary. It was 30 years ago Tuesday that computer scientist Tim Berners-Lee sketched out his plan for an invention he called the World Wide Web.

That invention has been, on the whole, an enormous force for good. But it has also introduced a host of unanticipated new social problems that the attack in Christchurch throws into stark relief.

The Web, and in particular social media platforms, has provided a fertile environment for hate groups to form and grow, and for far-flung and isolated racists to find a community that affirms and reinforces their beliefs. Although many details remain unclear, it appears that the accused shooter, Brenton Harrison Tarrant, 28, was part of the international online community of white supremacists. He may have discussed some of the planning for the attack in a chat room, according to reports, and social media also gave him a venue to broadcast his slaughter for 17 minutes, despite the safeguards tech companies say they employ against violent content.

Social media companies haven’t done enough to disrupt these groups. Any group or individual could be targeted; Friday’s targets were Muslims. But social-media radicalization and incitement have also been linked to right-wing mass shooters in Norway, South Carolina, and Pittsburgh; to foot soldiers of genocide in Myanmar; to a would-be assassin in Virginia; and to ISIS volunteers in Syria.

It’s no longer possible to brush off such events as isolated incidents or to minimize the role of social media companies in enabling them. The Web has poured gasoline on burning social divisions, created whole new categories of hate (there was no movement of “incels” to foment misogynistic violence before the Internet helped them find one another), and reduced the in-person interactions that can moderate extremism.

Tech companies can’t be allowed to shrug off the violence and social division abetted by their products as if it all would have happened anyway, as if they’re as blameless as the phone company when crooks make a phone call.

“The global network of white nationalist extremism depends on the framework of social media,” said David Ibsen, executive director of the Counter Extremism Project. “The inaction of social media platforms in addressing this problem serves to perpetuate it.”

On Friday, the social media giants issued their version of “thoughts and prayers,” even as it reportedly took Google 12 hours to remove all versions of the attack video from YouTube. Twitter and Facebook also struggled to remove the video, which was repeatedly copied and reposted with tiny alterations that may have helped the copies avoid automatic detection. Platforms have to do better at preventing the spread of violent imagery. They should employ human moderators if their artificial intelligence continues to fail, or shut down services they can’t run responsibly.
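Why do “tiny alterations” defeat automatic detection? A common first line of defense is hash matching: a platform fingerprints a known video and blocks exact re-uploads. But a cryptographic hash changes completely if even one byte of a file changes, so re-encoded or lightly edited copies sail through; catching those requires perceptual fingerprints that tolerate small edits. The sketch below illustrates the gap. It is a simplified illustration, not a description of YouTube’s or Facebook’s actual systems; the average-hash function is a toy stand-in for production perceptual hashing.

```python
import hashlib

# Toy 8x8 grayscale "video frame" (pixel values 0-255). Real systems
# fingerprint many frames per video; this is a minimal illustration.
frame = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]

# A re-uploaded copy with a "tiny alteration": one pixel nudged by 1.
altered = [row[:] for row in frame]
altered[0][0] += 1

def crypto_hash(img):
    """Exact fingerprint: SHA-256 over the raw pixel bytes."""
    return hashlib.sha256(bytes(p for row in img for p in row)).hexdigest()

def average_hash(img):
    """Toy perceptual hash: one bit per pixel, set if above the mean."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Count differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# The cryptographic hashes no longer match at all...
print(crypto_hash(frame) == crypto_hash(altered))           # False
# ...but the perceptual fingerprints are nearly identical.
print(hamming(average_hash(frame), average_hash(altered)))  # 0
```

Even perceptual matching can be gamed by aggressive crops, mirror flips, and re-encodes, which is exactly the cat-and-mouse game the platforms lost on Friday, and why automated filters alone are not enough.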

But it’s doubtful Silicon Valley will ever truly police itself. And while terrorism is clearly the most urgent problem, that’s not where the problems caused by social media end. Trolls and hackers and bots and bullies wear away at society and democracy in less overt ways, and they need to be addressed too. Governments, ultimately, have to protect their citizens from Silicon Valley with more muscular regulation.

Earlier this week, when Berners-Lee (now an MIT professor) reflected on his creation, he acknowledged that for all its successes, the Web “has also created opportunity for scammers, given a voice to those who spread hatred, and made all kinds of crime easier to commit.”

Yet, he wrote, “it would be defeatist and unimaginative to assume that the web as we know it can’t be changed for the better in the next 30 [years].”

We can’t wait 30 years for a better Web. Silicon Valley needs to start now.