If you don’t spend much of your valuable earth-time gazing into Reddit, Facebook, Twitter, 4chan, or any of the Internet’s other habitats for social media trolls, lucky you. For the rest of us, trolling remains a stubborn reality of Internet life.
Broadly speaking, a “troll” manifests as a more aggressive strain of the “hater,” a typically anonymous figure posting objectively offensive or abusive content, often repeatedly, and often targeting other users by goading them into defensive responses. Whether the troll actually believes in the purported premise of the trolling is almost beside the point. The act of trolling trumps any ideological end game it serves: The noun, at heart, is the verb.
And while any one of us can, during a lapse in preferred net etiquette, post a snide comment here or a snippy riposte there, true trolling is characterized by both the relentlessness of trolls and the faceless forms they assume. Fans of “The Walking Dead” or “Game of Thrones” have fresh visuals to recruit in imagining the nature of trolls: a ravenous mob of human-like creatures, emerging from the darkest depths, blindly lurching forth, snarling, snatching, and scratching at the living. Put up a wall, they’ll push through or climb over it; cut off their arms, and they’ll reach with their teeth. Their appetite for distraction is insatiable.
The recent #Gamergate episode (which found widespread anti-feminist trolling wielded against women in the gaming community) helped shift cultural awareness of the ever-burgeoning troll epidemic into the mainstream. Any high-profile news event will draw trolls out of the woodwork. But more and more signs of a general cultural backlash are starting to show.
In the United Kingdom, criminal convictions of trolls have increased nearly eightfold over the last decade. In Sweden, one reality TV show presents journalist Robert Aschberg as the “Troll Hunter,” tracking down Internet bigots and pulling them out from behind their screens; some have made trolling into a full-time preoccupation within their full-time occupations. And the outing of high-activity trolls has become an intermittent feature of the news cycle, from Gawker’s 2012 unmasking of infamous Reddit troll Violentacrez to Sky News’s outing of 63-year-old Brenda Leyland (a.k.a. @sweepyface). (Leyland committed suicide in a hotel room days after the story ran.)
The promise of public penance and a semblance of social media justice make the shaming of trolls prime fodder for reality TV, but while such shaming may dissuade would-be trolls, it does nothing to disable those already at work.
To that end, sites like Reddit and Twitter have started taking bolder steps toward impeding trolls. A recent survey of more than 15,000 “Redditors” revealed that 50 percent of those who wouldn’t recommend the site cited hateful or offensive content and community as the reason, with female users registering twice the dissatisfaction of other users. Reddit has responded to this dissatisfaction by making moves to “promote ideas, protect people.”
This has amounted to the announcement (after 10 years) of an anti-harassment policy last month, and the deletion of at least five subreddits (communities within Reddit) determined to use the site as a platform for harassment. One of the few printable forum names among them is r/fatpeoplehate. These five represent but a fraction of the subforums that could arguably be categorized as offensive or outright abusive (one particularly racist subreddit still has more than 10,000 subscribers, roughly twice what r/fatpeoplehate accumulated).
As with trolls, trolling communities, once disbanded, have little difficulty whipping up new online identities and trolling forth, but the removal of abusive subreddits at least signals a concern among Reddit’s overseers with the real-world ramifications of allowing harassment to flourish in the name of free speech.
Twitter, too, has taken recent steps to combat trolls without entering the dicey game of controlling content. The platform recently widened the language of its policy to cover more types of abuse, and moved to block repeat offenders by requiring phone numbers (a stronger bit of verification than an e-mail, and slightly harder to obtain). On Wednesday, Twitter also introduced a function that would allow users to share block lists, so known trolls can be blocked en masse before having a chance to strike.
Meanwhile, researchers at the forefront of troll prevention are devising ways to identify “future banned users” by analyzing trollish posts. They found that five to 10 posts were enough to make determinations with 80 percent accuracy as to which users would end up banned for misbehavior.
But despite all the work underway to neutralize the threat of Internet trolls, the tactical value of trolling is finding its way out of the basement and into the halls of power. A recent Times story from Adrian Chen (who wrote the original Gawker piece that outed Violentacrez) examines the industrialized “troll farms” that serve Russian government interests. Chen revealed Russia’s shady Internet Research Agency to be a multilayered, internationally focused misinformation operation. One former operative of the agency is now suing it for moral damages.
And even within our own government, we’ve seen examples of trolling techniques in action (see: Senator Tom Cotton’s recent trolling of Iranian Foreign Minister Mohammad Javad Zarif for all to see on Twitter). Such high-profile low-blows have even earned their own designation: diplo-trolling. This mingling of troublesome Internet tactics with sensitive foreign policy issues not only threatens the integrity of international relationships, it legitimizes trolling as a form of public discourse. Not helpful.
Blockades, barriers, walls, and moats can only do so much to keep the trolling hordes at bay; we know this. But as with the trolls, the effort we put into pushing back ultimately means more than the outcome we end up with.