Last week, the tech industry stepped up to the plate. After the pro-Trump attack on the Capitol, President Trump was banned from Twitter, and Facebook banned his accounts at least until President-elect Joe Biden is sworn in. Parler was effectively eliminated overnight by Apple, Google, and Amazon Web Services. Twitter shut down more than 70,000 accounts linked to QAnon. These unprecedented actions have upended assumptions about how far tech companies will go in responding to harmful activities enabled by their platforms.

It should now be clear that chatter online can directly incite, facilitate, and increase the likelihood of violence offline. But this is not the first time this problem has reared its ugly head. Online disinformation directly incited mob violence across India. Hate speech on Facebook fueled ethnic cleansing in Myanmar. We know that perpetrators of far-right terrorist attacks consume hateful content online and are often encouraged by online communities.

But this time it wasn’t users in an obscure forum egging on one another. It didn’t take investigations carried out by the law enforcement community to trace violence back to online activity. It presented itself starkly, for the world to see, with organized mobs following messaging from none other than the president — the most powerful man in the world using Twitter as a loudspeaker. It took the unimaginable to push Big Tech to take big action.

It often takes tragedy for the tech sector to be compelled to act. This happened after the white supremacist rally in Charlottesville, where one counterprotester was killed, when companies like GoDaddy and OkCupid took action to remove white supremacists from their services. After the mosque attacks in Christchurch, New Zealand, where 51 people were killed, tech companies and many governments came together, at the behest of Prime Minister Jacinda Ardern, to boost efforts to respond to terrorism online. The response of Big Tech last week was certainly remarkable and important, but it came at the 11th hour.

The tech sector has systematically overlooked the threat of far-right extremism. Six years ago, the tech companies took then-unprecedented action to make it significantly harder for ISIS and its supporters to spread propaganda and incite violence. In the years that followed, I sat in rooms with decision-makers at the tech companies. When I offered evidence on the scale of violent far-right activity on these platforms, I was told that action wouldn't happen: there were no far-right terrorists or groups designated by the US government as Specially Designated Global Terrorists (SDGT) or placed on the US Foreign Terrorist Organization (FTO) list. I was told it was too political. I was asking them to take action against the KKK.

The tech companies have lacked the will to respond to far-right violence unless instructed to do so by the US government or unless there was a clear division between politics and security threats. This approach might have been effective if the US government had delivered on its responsibility to designate far-right terrorist individuals and organizations on the SDGT and FTO lists.

In the absence of this, the tech sector has so often waited either for tragedy or for governments to impose legal and commercial imperatives to act.

The events of last week made it clearer than ever that the tech sector should take a stance against all forms of incitement to violence.

The impact of these actions will be major. We know from past efforts to de-platform extremist movements, most notably ISIS, that de-platforming, which limits the reach of influential voices inciting violence, can work.

A 2015 Brookings Institution report showed that suspensions of ISIS accounts significantly slowed the overall dissemination of its propaganda. Suspensions limit opportunities for recruitment and whittle away at the audience base. In 2019, the European Union led an effort to degrade ISIS networks on Telegram in a single “day of action,” wiping out thousands of accounts associated with the terrorist organization.

Last week’s events should propel a reckoning within the tech sector to prioritize the protection of online communities from incitement to far-right extremist violence, to deliver effective moderation, and to prevent the abuse of their platforms by users affiliated with far-right extremism.

This goes well beyond the major social media platforms and needs to be cross-industry. It needs to include streaming, e-commerce, hosting services, and more. De-platforming is crucial, but it is not enough on its own. In preventing abuse of platforms by far-right extremist actors, we need to accept two things: First, there will always be content that falls into a gray zone and will not qualify for removal. Second, there will always be some spaces on tech platforms that are not subject to moderation.

For these cases, in addition to their moderation efforts, tech companies are in a unique position to deliver positive interventions to users who may be at risk of getting involved in violence. In analyzing extremism online, my organization, Moonshot CVE, recently evaluated Facebook’s first attempt to redirect white supremacist and neo-Nazi users toward services that can facilitate their exit from hate movements. We found that Facebook successfully functioned as a conduit between over 2,000 high-risk individuals and support services. These are promising results, and they demonstrate that one of the largest corporations in the world can help de-escalate people from violence.

On other social media platforms, we have found that American audiences seeking extremist content and disinformation online are willing to engage with messaging aiming to de-escalate them from violence. They are receptive to messaging that tells them their choices matter, asks them to pause and reflect, and appeals to their ties to their neighbors and the people they love.

Importantly, we have found these online audiences are interested in, and in need of, psychosocial support, especially in times of crisis. Our early studies show that far-right extremists are 48 percent more likely than the general public to engage with psychosocial support content. Those seeking to join far-right extremist groups like the KKK are 115 percent more likely.

America is in crisis, and tech companies can help. They need to go beyond just content moderation and help Americans get the resources they need to move beyond violence and toward a safer society.

Vidhya Ramalingam is founder and CEO of Moonshot CVE.