Much of our political and social turmoil is being fueled by social media. While online movements involve and affect real people, what’s often invisible is the degree to which bots — automated accounts that post independently or under the direction of human controllers — stoke the flames of division behind the scenes.
You might be in a toxic relationship with bots without realizing it.
A recent report from the University of Oxford shows just how widely bots are used to manipulate public opinion on both mainstream social media platforms and right-wing services like Parler. In 2020, Russian trolls once again tried to influence a US presidential election, amplifying false claims that the election was stolen. China used bots to spread misinformation about the COVID-19 pandemic in order to deflect criticism of itself and cast doubt on the responses of the United States and other democracies. Altogether the Oxford researchers identified at least 57 countries using bots in organized campaigns to sway political discourse at home or in rival nations.
Bots frequently amplify misinformation and conspiracy theories shared by real people, giving a megaphone to what might otherwise be a lone misguided voice. They hijack conversations on controversial issues to derail or inflame the discussion. For example, bots have posed as Black Lives Matter activists and shared divisive posts designed to stoke racial tensions. When real people try to make their voices heard online, they do so within a landscape that’s increasingly poisoned and polarized by bots.
I have spent much of my career developing artificial intelligence to identify online bots. My colleagues and I are in a computational arms race: As the tools we build to track down fake accounts improve, so do the bots. As important as our work is, using software tools to find individual bots won’t eliminate the problem. Social media platforms must act to root out bots on a systemic level.
What makes bots increasingly dangerous is their sophistication and scale. Artificial intelligence has become so good at mimicking human speech that it’s hard for the average user to tell what’s real and what’s fake. Last fall, an account powered by the advanced GPT-3 language model was let loose on Reddit. Its conversations were so human-like that it took more than a week before users realized they were interacting with a bot. You can see for yourself just how sophisticated this AI is on sites like Talk to Transformer.
Bots also have tremendous reach. While the average person can share misinformation with dozens or perhaps hundreds of friends on social media, an army of bots can spread the same content to millions in a matter of hours through a steady drumbeat of posts. A 2018 study found that just 6 percent of Twitter accounts, all of them suspected bots, were responsible for spreading 31 percent of misinformation around the 2016 election. In many cases, the false information began trending in less than 10 seconds.
Simply removing bot accounts from popular platforms isn’t enough. Facebook deleted nearly nine billion bogus accounts in 2018 and 2019, but the company still estimates that at least 5 percent of its users are fake. Organized misinformation campaigns have also been known to hack real accounts and convert them to bots, taking advantage of these accounts’ existing networks and credibility.
Instead of playing whack-a-mole with individual accounts, social media platforms need to zoom out and attack the bots en masse. As AI becomes more sophisticated at mimicking humans, the best way to spot bot activity is by looking at the context of a post. Has a hashtag risen out of nowhere, driven by an interlinked network of suspicious accounts? Does a group of users post about a single topic ad nauseam, echo similar talking points, or repeatedly divert unrelated conversations to a particular topic?
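These contextual signals can be made concrete with a simple heuristic. The sketch below flags hashtags whose volume comes from a small set of accounts posting in near-lockstep; the function name, thresholds, and the post format are illustrative assumptions, not any platform’s real detection pipeline.

```python
from collections import defaultdict

def flag_coordinated_hashtags(posts, min_posts=20, max_unique_ratio=0.3, max_window_s=60):
    """posts: list of (account_id, hashtag, timestamp_seconds) tuples.

    Flags hashtags whose volume is driven by a small, tightly synchronized
    set of accounts: a crude stand-in for the contextual signals described
    above (sudden spikes, interlinked accounts echoing the same tag).
    Hypothetical thresholds; a real system would tune these on labeled data.
    """
    by_tag = defaultdict(list)
    for account, tag, ts in posts:
        by_tag[tag].append((account, ts))

    flagged = []
    for tag, entries in by_tag.items():
        if len(entries) < min_posts:
            continue  # too little volume to judge
        accounts = {account for account, _ in entries}
        unique_ratio = len(accounts) / len(entries)  # few accounts, many posts?
        times = sorted(ts for _, ts in entries)
        burst = times[-1] - times[0] <= max_window_s  # did it all land at once?
        if unique_ratio <= max_unique_ratio and burst:
            flagged.append(tag)
    return flagged
```

A hashtag pushed by five accounts posting five times each within half a minute would trip both tests, while the same volume spread across dozens of independent accounts over an hour would not.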
When the algorithms that decide what you and I see are a black box, it’s difficult to stop misinformation from spreading and to gauge the authenticity of what we’re exposed to online. Only the companies themselves have the necessary back-end data, such as accounts’ IP addresses and posting patterns, to provide context about why a specific hashtag is trending or where and how a piece of viral misinformation started.
Once bot campaigns are identified, social media companies can take several steps to hinder them while respecting the free speech of human users. They could require a simple CAPTCHA test before publishing any post containing a hashtag that is largely being spread by bot accounts. They could give users more context about the information they encounter, such as the country where a viral hashtag originated or patterns in the prior posting history of other accounts. They could even experiment with computational techniques that generate a summary of each user’s activity, pulling the curtain back on accounts that post relentlessly about a single topic or tend toward inflammatory content. There are also changes companies can make behind the scenes, such as tweaking their algorithms to de-prioritize posts from bot-driven campaigns in users’ news feeds. Facebook did this temporarily in the aftermath of the 2020 election, and traffic to more authoritative news sources increased as a result.
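The activity-summary idea can be sketched in a few lines. This is a minimal illustration, assuming posts are reduced to hashtags; real platforms would draw on far richer signals, and the function name and fields here are invented for the example.

```python
from collections import Counter

def activity_summary(post_tags, top_n=3):
    """post_tags: list of hashtag strings, one per post by a single account.

    Returns the account's most-used hashtags and how concentrated its
    activity is on its single top topic (1.0 means every post is on one tag),
    the kind of summary that could expose single-issue bot accounts.
    """
    counts = Counter(post_tags)
    total = sum(counts.values())
    if total == 0:
        return {"top_topics": [], "concentration": 0.0}
    top = counts.most_common(top_n)
    concentration = top[0][1] / total  # share of posts on the top topic
    return {"top_topics": top, "concentration": round(concentration, 2)}
```

An account whose posts are 80 percent one hashtag would show a concentration of 0.8, a number a platform could surface next to the profile without removing any speech.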
Thus far, social media companies have been reluctant to fight bots as aggressively as possible. Twitter recently began labeling state-sponsored media accounts, such as Russia Today, which are often used to post content that is then amplified by bots. However, this small step came only after prolonged pressure from users and the US government. The reality is that platforms have ample incentives to continue promoting divisive content and misinformation as long as it engages their audience. All activity, whether authentic or not, is good for their bottom lines.
Ultimately, government regulations may be necessary to make platforms safeguard the integrity of online discourse. Regulation, however, does not mean censoring content, an approach that has backfired in India and other countries. Instead, governments should pursue rules that encourage transparency, such as requiring platforms to reveal data about the geographic origin or posting behavior of bot-associated accounts, hashtags, or viral content. Regulations could also require platforms to explain their decisions to block or remove content. Such transparency can offer users important context for what they see online without limiting free speech. Our long-term goal should be to teach people to think more critically about their information sources, and transparency gives them the facts they need to make those judgments.
Reestablishing space for productive public discussions around politics, climate change, public health, and racial justice requires tougher tactics against the Internet’s bot infestation. If social media companies take responsibility for their platforms and stop letting bots drown out and derail the conversations of real people, then we can get back to the founding principle of the Internet: the authentic and free exchange of ideas.
Victor Benjamin is an assistant professor of information systems at the W.P. Carey School of Business at Arizona State University.