Many a meltdown took place recently as thousands of Twitter users discovered that swaths of their followers on the social media platform had vaporized overnight.
Conservative pundits and alt-right activists were among those extra bothered by the sudden virtual exodus.
“I've lost close to 1,000 followers over the past few hours,” tweeted recently de-verified white nationalist Richard Spencer. “Major purge underway.”
What was happening that had decimated so many alleged fanbases and left so many crying #TwitterLockout on Twitter and elsewhere?
Call it Twitter’s version of spring cleaning. Around Feb. 20, the network flushed thousands of automated accounts — more commonly known as “bots.”
The company defended the purge as “part of our ongoing, comprehensive efforts to make Twitter safer and healthier for everyone.”
“Twitter’s tools are apolitical,” read a statement from the company, “and we enforce our rules without political bias. As part of our ongoing work in safety, we identify suspicious account behaviors that indicate automated activity or violations of our policies around having multiple accounts, or abuse.”
Spencer and others soon reported follower numbers creeping back up as those locked out had to sign in to re-verify their fleshy personhood.
While Twitter’s tools are apolitical, the problem they address is inextricable from politics. The influence of hordes of bots and fake accounts on social media is at the center of concerns about Russian interference in American elections. The disinformation circulated by such accounts is seen as corrosive and polarizing to the country’s political conversation.
The Parkland school shooting has highlighted the severity of this rift, and the efficacy of bots (largely operating from Russia) in helping fiction masquerade as fact. The rise of conspiracy theories and hoaxes into the algorithmic mainstream has lately tightened into something more like a dystopian first-response unit. No sooner were children who survived the shooting speaking out on camera than there were bots claiming they were actors in a “false flag” operation.
“There is a path out of this mess,” writes Joshua Topolsky in a piece for The Outline titled “We Are in an Information Crisis,” “but it has to begin with the largest technology companies in the world accepting that their algorithms really don't understand the value of information. More importantly, it means changing the way we think and learn about what information is, how you process it, and how you decide to spread it to other people.”
Tweaks to news-delivery algorithms like the ones Facebook has attempted and purges like Twitter’s are necessary parts of preventing social media from becoming overwhelmed by disinformation. But the bulk of the burden will rest on readers — and that’s a big problem. No offense.
It’s not that you’re not smart. (You’re very, very smart. Just look at you.) It’s just that the botsmiths are smart too. And one reason bots have been such excellent channels for actual fake news is their ability to slip through our filters. Tweets from bots don’t have to be compelling or memorable, they just have to be there — like signs for the speed limit. A single glance is all it takes to slow you down.
This leaves social media consumers with a few choices. Disengage altogether (not bad, but not likely), wing it and trust in common sense to prevail (*slowly turns head toward the White House*), or develop an even greater sensitivity to bot methods and tells.
I’m leaning toward that last option because it’s the only way, frankly, to save the Internet. To start, clear out bots from your follower base. Bot accounts look stranger than the strangers that invariably drift into your online friendzones. Scrutinize pic-less profiles and unfamiliar (or ubiquitous-seeming) photos, and squint suspiciously at usernames that look auto-generated.
On Twitter, some shaggy metrics can help you determine if you’re reading human tweets, or more botsam and jetsam. Check the creation date of the account against the number of tweets. Even the thumbiest Kardashian won’t typically generate more than 40 or 50 tweets per day, but bots are more industrious. According to the Digital Forensics Research Lab, a daily tweet count above 72 enters suspicious territory.
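For the numerically inclined, that heuristic is simple arithmetic: divide an account's total tweets by its age in days and compare against the Digital Forensics Research Lab's threshold of 72. A minimal sketch (the function names and example figures here are illustrative, not anything Twitter or the DFRLab publishes):

```python
from datetime import date

def tweets_per_day(total_tweets, created, today=None):
    """Average daily tweet volume since the account was created."""
    today = today or date.today()
    days_active = max((today - created).days, 1)  # avoid dividing by zero for day-old accounts
    return total_tweets / days_active

def looks_suspicious(total_tweets, created, today=None, threshold=72):
    """True if the account out-tweets the DFRLab's 72-per-day benchmark."""
    return tweets_per_day(total_tweets, created, today) > threshold

# A hypothetical account created Jan. 1 with 10,000 tweets by Feb. 20
# averages 200 tweets a day -- well past suspicious.
print(looks_suspicious(10_000, date(2018, 1, 1), today=date(2018, 2, 20)))
```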
Check, too, the voice of the tweet. According to a team of researchers at NYU, bots are generally more likely than people to retweet things, and the “most common use of bots in the period of time we examined in Russian political Twitter was to share news headlines, although not necessarily links to the news stories.” This, they reason, can help skew search rankings.
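The retweet tell can also be eyeballed as a simple proportion: what share of a timeline is retweets rather than original posts? A minimal sketch, assuming a list of tweet records with a hypothetical `is_retweet` flag (the field name is an assumption for illustration, not Twitter's actual API schema):

```python
def retweet_ratio(tweets):
    """Fraction of a timeline made up of retweets.

    `tweets` is a list of dicts; `is_retweet` is a hypothetical flag,
    not a guaranteed field of any real Twitter API response.
    """
    if not tweets:
        return 0.0
    return sum(1 for t in tweets if t.get("is_retweet")) / len(tweets)

# A timeline that is nine parts retweet to one part original
# speech reads more like a relay station than a person.
timeline = [{"is_retweet": True}] * 9 + [{"is_retweet": False}]
print(retweet_ratio(timeline))  # 0.9
```

There is no magic cutoff here the way there is for tweet volume; the ratio is just one more shaggy metric to weigh alongside the others.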
Sloppy tweeting habits can also reveal the residues of buggy bots. Stray colons or text artifacts that look left over from iffy automated editing processes can be clear giveaways, but scroll through a tweeter’s timeline to see if a personality emerges — or if their entire identity revolves around silently forwarding stories from three websites. If a certain turn of phrase (or off note) strikes some déjà vu, you may have seen the same tweet packaged elsewhere. (The practice of “tweetdecking,” or selling massive amounts of retweets, is another problem the platform is struggling to address.)
But the best way to regulate your exposure to bots — which will only find ways to grow more insidious — is to manage your friends and followers. Keep your friends close, your enemies closer, and the bots talking to themselves.
Michael Andor Brodeur can be reached at firstname.lastname@example.org