The missteps of Facebook and Google after the Las Vegas shooting raise fresh questions about the companies’ capacity to manage the responsibilities they’ve assumed. The Silicon Valley giants serve as the primary conduit of information for millions of Americans, but they circulated obviously false and malicious stories that misled their users about the attacker’s identity and motives. The tech industry, already in a harsh spotlight for allowing Russian operatives to manipulate the US election on its platforms, needs to adopt a more proactive approach to accuracy and, at the least, put human editors in charge of vetting news.
Facebook and Google generate links to popular stories using proprietary algorithms. Such formulas can be gamed (just ask, say, Rick Santorum), which has led to a cat-and-mouse game: The companies constantly claim they’re refining their algorithms to weed out bad actors, but the scammers and trolls are never far behind. That phenomenon was on display again after the shooting, when items from a Russian propaganda outlet appeared among Facebook’s trending topics. The fake news items named an innocent individual as the shooter and falsely reported a link to the ISIS terrorist group. For a time, Google’s top recommended link was to a page on 4chan, the notorious troll message board, blaming the wrong person for the massacre.
Those are serious, inflammatory mistakes in the midst of a national trauma, and they’re not isolated incidents. Facebook briefly gave humans a role in vetting trending news links, but retreated after its editors were accused of harboring liberal bias. That was the exact wrong response: To the extent the criticisms were valid, the solution is better editing, not no editing. Without human intervention, the company is now circulating misinformation from sources like Sputnik, along with fringe outlets like Alt-Right News. The peculiar result is that some of the nation’s most influential news sources are publishing information any freshman journalism student would know to avoid.
The missteps come just as the companies are facing questions about their role in last year’s election. Facebook sold ads to buyers aligned with the Russian government, which were used to plant false or incendiary stories in swing states. Silicon Valley’s lax standards for accepting ads have now drawn congressional scrutiny and promises of reform. Facebook pledged to hire 1,000 employees to vet ads, for instance, though it’s not clear whether that would do anything to prevent fake news from appearing on its trending topics page.
Human editors will make mistakes too, and perhaps someday the companies will perfect automated news editing. But it’s unconscionable for them to continue treating the nation as guinea pigs while they work through the kinks in their badly flawed news algorithms. America is in a volatile state, partly because of the divisive disinformation that Silicon Valley has unleashed. The family of the man wrongly accused by the 4chan trolls received death threats; the next time Google and Facebook amplify misinformation, the outcome could be much worse.