Be careful when you try to correct false information posted on Twitter. You might just make matters worse, according to new research from the Massachusetts Institute of Technology.
A team of scholars at the MIT Sloan School of Management found that Twitter users often double down on their bad news habits after being corrected.
The study examined 2,000 users who had shared misleading information and found that those who were told of the falsehoods became more likely to retweet stories from untrustworthy or politically extreme news sources, and more likely to retweet stories containing harsh or toxic language.
“It made the effect of their sharing worse, in numerous dimensions,” said David Rand, one of the MIT Sloan researchers.
“Being publicly corrected by another person makes them less attentive to what they retweet,” Rand added, “because it shifts their attention not to accuracy but toward social things like being embarrassed.”
Rand and his colleagues began with 11 news stories that had been debunked as false by the fact-checking website Snopes.com. Next, they found 2,000 Twitter users who had tweeted the false stories.
About 75 percent of the users who’d transmitted false stories were conservatives, according to an analysis by the researchers, in keeping with a 2019 Northeastern University study that found conservatives are more likely than liberals to share fake news online.
Next, the researchers created automated Twitter bot accounts disguised with icons that suggested the account holders were real people. They used these accounts to send messages to the Twitter users who’d shared the false stories, informing them that the reports were untrue.
“I’m uncertain about this article,” said one such message. “It might not be true. I found a link on Snopes that says this headline is false.”
Then the scientists monitored the news stories tweeted by these users over the ensuing 24 hours and tracked the source of each story.
Rand and his colleagues compared the stories to a list of 60 major news sites that had been evaluated by professional fact-checkers for bias and accuracy. Twenty of the news sites were run by traditional mainstream news organizations, 20 others were highly partisan left- or right-wing news sites, and 20 more had a reputation for routinely publishing false stories.
Rand’s team found that Twitter users who’d been chastised for posting a false story were more likely to retweet news stories from extremely partisan or less trustworthy sites.
Curiously, the negative effect applied only to retweets. The users’ original tweets were no more likely than before to come from a low-quality news site. Rand said this is probably because people are more attentive when writing their own Twitter messages than when retweeting someone else’s.
Rand said the study shows that controlling the online dissemination of extremist views and inaccurate information is a complex task with no easy solutions. It’s still a good idea to push back against false news stories, he said, but he warned of a backlash if the criticism seems too personal.
“If you’re going to correct people,” he said, “really try to focus on accuracy and don’t try to make them feel defensive.”