There’s a lot of nonsense on the Internet — democracy-hobbling, public-health-destroying, Ben-and-JLo-defaming nonsense.
Trouble is, there aren’t enough professional fact-checkers to debunk it all.
But a new study by MIT researchers, appearing in the journal Science Advances, suggests there may be a way to scale up the kind of work they do: Crowdsource it.
It turns out that a small group of laypeople tends to come to the same conclusions about the veracity of an online story as the pros.
Here’s how the study worked.
The researchers started with 207 stories that a Facebook algorithm identified as in need of fact-checks — either because there was something suspect about them or because they had gone viral or were about an important topic like health.
Then they had 1,128 US residents on Amazon’s Mechanical Turk platform, where people can be hired to perform online tasks for pennies at a time, review 20 of the articles by reading just the headline and the lead sentence.
The reviewers were asked to categorize each article as “true,” “misleading,” or “false” or to say they weren’t sure. They were also asked to provide quick, finer-grained responses to seven questions — judging the extent to which each article “1) described an event that actually happened, 2) was true, 3) was accurate, 4) was reliable, 5) was trustworthy, 6) was objective, and 7) was written in an unbiased way.” The answers were combined into an overall accuracy score for each article.
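As a rough sketch, that pooling step might look like the following, assuming the seven questions were answered on a 1-to-7 scale and that responses are simply averaged (a hypothetical simplification; the paper’s exact aggregation may differ):

```python
# Hypothetical sketch of pooling layperson ratings into one accuracy score.
# Assumes each of the seven questions is answered on a 1-7 scale and that
# scores are averaged, first within each reviewer, then across reviewers.

from statistics import mean

def article_accuracy(ratings_per_reviewer):
    """ratings_per_reviewer: a list of 7-item lists, one per layperson."""
    per_reviewer = [mean(r) for r in ratings_per_reviewer]  # average the 7 answers
    return mean(per_reviewer)                               # pool across reviewers

# Three invented reviewers rating the same article:
reviewers = [
    [6, 6, 5, 6, 6, 5, 5],
    [4, 5, 5, 4, 5, 4, 4],
    [7, 6, 6, 6, 7, 6, 6],
]
score = article_accuracy(reviewers)
```

The key point is that individual noise washes out: any one reviewer may be off, but the pooled number stabilizes as more reviewers are added.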
Meanwhile, a group of three professional fact-checkers reviewed all 207 articles — reading the full stories and conducting research on each.
By some measures, the professional fact-checkers were in considerable agreement with one another. For over 90 percent of the articles, at least two of the three agreed on a categorical rating (“true,” “misleading,” “false”).
Nonetheless, the study found real variance: all three pros agreed on the same rating only half the time — showing that truth, as the paper put it, “is often not a simple black-and-white classification problem.”
With that in mind, the researchers used the variation among the professionals as a benchmark for evaluating the laypeople’s judgments. And they found that even when pooling a relatively small number of laypeople’s evaluations, the correlation between laypeople’s and fact-checkers’ evaluations was about the same as the correlation among the fact-checkers themselves.
In other words, if you combine the judgments of a handful of laypeople reading only a headline and the lead sentence of a story, you’ll get something close to the judgment of a professional fact-checker conducting an in-depth study.
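That benchmark logic can be sketched in a few lines: compute the correlation between two fact-checkers’ per-article scores, then compare it with the correlation between the pooled lay score and the fact-checkers’ average. All of the numbers below are invented for illustration, not taken from the study.

```python
# Toy illustration of the study's benchmark: is the crowd-vs-pro correlation
# comparable to the pro-vs-pro correlation? All data here are made up.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-article accuracy scores for five articles:
checker_a = [1.0, 0.2, 0.8, 0.1, 0.9]
checker_b = [0.9, 0.3, 0.7, 0.2, 1.0]
lay_pool  = [0.8, 0.3, 0.9, 0.2, 0.8]   # average of ~10 lay ratings per article

pro_agreement = pearson(checker_a, checker_b)
lay_vs_pro = pearson(lay_pool, [(a + b) / 2 for a, b in zip(checker_a, checker_b)])
```

If `lay_vs_pro` lands in the same range as `pro_agreement`, the crowd is tracking the pros about as well as the pros track each other — which is the paper’s headline finding.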
The researchers estimate that if social media companies paid laypeople to vet articles, it would cost about 90 cents per story. And stories deemed untrustworthy could be downranked, or demoted on the website — an approach that has been shown to dramatically reduce sharing.
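A toy version of downranking might work as follows, assuming a feed ordered by an engagement score, with flagged stories’ scores scaled down. The threshold and penalty values here are invented, not drawn from any platform’s actual system.

```python
# Toy downranking: articles whose pooled accuracy score falls below a
# threshold have their engagement score scaled down before the feed is
# sorted. Threshold and penalty are hypothetical.

UNTRUSTWORTHY_THRESHOLD = 0.4   # pooled accuracy below this triggers a demotion
DOWNRANK_PENALTY = 0.1          # multiply the engagement score by this factor

def rank_feed(articles):
    """articles: list of (title, engagement, accuracy) tuples."""
    def effective_score(article):
        _, engagement, accuracy = article
        if accuracy < UNTRUSTWORTHY_THRESHOLD:
            return engagement * DOWNRANK_PENALTY
        return engagement
    return sorted(articles, key=effective_score, reverse=True)

feed = [
    ("Viral health claim", 900, 0.2),   # popular but rated untrustworthy
    ("Local news report", 300, 0.9),
    ("Opinion column", 500, 0.7),
]
ranked = rank_feed(feed)  # the viral claim sinks to the bottom of the feed
```

The design choice worth noting: demotion, unlike outright removal, degrades reach without requiring a binary true/false call — which fits a world where, as the fact-checkers’ own disagreements show, truth is often not black and white.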
Some of the largest social media companies are already experimenting with crowdsourcing. Two years ago, Facebook launched a pilot called “Community Review” that paid laypeople to flag questionable stories for third-party fact-checkers to examine. The company worked with data and opinion firm YouGov to ensure that the reviewers reflected the diverse viewpoints — including political ideologies — of Facebook users.
The challenge in scaling up this kind of approach — and giving lay reviewers fuller power to judge and downrank stories — is that participation might have to be thrown open to a broader public. With thousands upon thousands of articles to be reviewed by 10 or 12 people each, a lot of manpower would be required. And opening up the process could give partisans a chance to game the system. Conservative or liberal activists could organize supporters to review stories en masse and deem favored pieces of propaganda true.
The social media giants would have to figure out a way around this problem — or risk spawning more nonsense on the Internet.