Hotels are ranked on TripAdvisor, books and consumer goods on Amazon, restaurants and services on Yelp. The wisdom of the online crowd sways countless decisions on where to dine, what to read, and whether to buy.
But is the crowd really that wise? Apparently not always, according to a study published Thursday that showed consumers’ ratings of comments on a news website were easily swayed by the positive ratings posted by others.
The research, coauthored by a researcher at the Massachusetts Institute of Technology, provides a cautionary tale for the social-networked age: Grade inflation is polluting the Internet.
The study showed that while people were easily, if subconsciously, nudged into overinflated enthusiasm and approval, they were not similarly susceptible to negative influences. That suggests that review sites may be skewed toward overly favorable ratings.
“The case where the wisdom-of-the-crowd effects work well is where each person brings their own observation and knowledge, however imperfect and idiosyncratic,” said Christopher Chabris, an associate psychology professor at Union College who was not involved in the research. “What this shows is when you don’t have that independence and everyone sees the history of other people’s opinions, you can get big biases in the outcome.”
The findings published in the journal Science surprised even coauthor Sinan Aral of MIT’s Sloan School of Management, who studies how social influences can affect decision-making and purchases.
The experiment was conducted on an unidentified online news website, where comments are voted up or down based on how good other readers think they are. Aral and collaborators randomly selected 101,281 reader comments, and as soon as they were posted, the researchers rated some comments with one up vote and others with one down vote. Some were left alone.
They found that this minor intervention could lead to a snowball effect.
Comments given an initial positive vote were 30 percent more likely to receive a very high rating than those left alone, and their final ratings were 25 percent higher than average.
Negative votes, on the other hand, were quickly corrected by the wisdom of the crowd.
“It’s a bit worrisome,” said Aral, an associate professor of information technology and marketing. “If you think even beyond ratings, if this herding behavior, or the tendency toward bubbles is true, it has implications for housing prices, stock market prices, and we need future research in other settings.”
The power of social influence to skew people’s choices has been clear in offline environments. In one famous experiment conducted in the 1950s, people were asked to choose which of three lines was the same length as another. If other people in the room publicly made an obviously wrong choice, many participants went along.
Online ratings seem empowering; after all, the aggregated feelings of dozens, hundreds, or thousands of people aren’t subject to the same limitations as an expert review, which reflects a single person’s experience and may be influenced by conflicts of interest and other unknown sources of bias. But the new study is part of a growing body of research showing online ratings can also be skewed and could be vulnerable to manipulation by the companies or people being evaluated.
In a paper posted online late last year, Michael Luca, an economist and assistant professor at Harvard Business School, analyzed Yelp restaurant reviews and how they evolve over time.
Luca found that simply averaging all the reviews a restaurant receives might not be the most accurate way to rate the experience of going to the restaurant.
That’s partly because prior reviews appear to influence the reviews that come after them — the same sort of peer influence that was found in the new study. In Luca’s study, reviewer biases could lead to ratings that were off by as much as a quarter of a star, in either direction.
He developed a way to weight the reviews so the overall rating better reflected the diners’ experiences.
“Every type of information source comes with a whole host of problems. For reviews, it’s not that this is a deal-breaker that the problem exists,” Luca said. “It’s just that this is the first time people are starting to think about the problem because the system is new.”
It’s difficult to say how strongly the new findings would apply to other areas. Even within the news site the researchers used, certain topics were more susceptible to social influence than others: comments on stories about politics, culture, and business were subject to overinflated positive ratings, whereas comments on stories categorized as general news, economics, information technology, and fun were not vulnerable to the enthusiasm of the herd.
Aral said he is now working to understand the decision-making mechanisms behind the biased ratings he observed — and has experienced himself.
Earlier this summer, Aral was eating at a restaurant in New York and went to leave a review on Yelp.
He had intended to award a three-star rating but saw that the last review had rated the restaurant much higher and praised the prices and salad dressing. He chose four stars instead.