
Why we need to learn to trust robots

A new study shows we prefer human guides even when algorithms work better. That’s a problem, say researchers.


When it comes time to choose a bottle of wine in a posh American restaurant, you may look around for the sommelier. But ask for help making your selection at a restaurant in the Netherlands, and you could be handed a tablet.

There, you can enter your main dish into a program called WineStein and receive a recommendation from the restaurant’s inventory, based on an analysis of the meal’s gastronomic properties. The software, now in use at about 100 restaurants, supermarkets, and wine shops in Holland, has recently made its debut in the United States and can be downloaded as an app. “People love it,” says Cor Balfoort, WineStein’s scientific director.

Would you try out a robot sommelier? Sure, why not? But then here’s a harder question: If the first bottle it recommends isn’t great, would you give it a second chance?

In the rest of life, as in restaurants, we’re increasingly dependent upon algorithms. We use them for investing in stocks, selecting baseball players, predicting the weather, and deciding which movie to watch next. Soon, IBM’s Watson will be diagnosing our illnesses, and Google’s tech will be driving our cars.

Sometimes we put faith in these robot guides, as when we mindlessly follow directions from the GPS. But often, experts say, we actually don’t trust them as much as we should. Specifically, a new study finds, people are reluctant to give computers a second chance when they mess up, even if it’s clear that the machines have a higher hit rate overall. Given that in many situations their “judgment” is superior to our own, that’s a problem. If algorithms are really going to live up to their potential to help us, some scientists are arguing, we’re going to need to figure out how to put more confidence in their results.

In a paper called “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” forthcoming in the Journal of Experimental Psychology: General, the University of Pennsylvania researchers Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey asked subjects to consider the challenge of making a difficult forecast: predicting either MBA students’ academic success or states’ airline traffic. They could choose to tie what they earned to either the prediction accuracy of a human (usually themselves) or that of a statistical model. Before making their decision, they first saw the model’s performance on several trial runs, or saw the human’s performance, or both, or neither.

When they had not seen the statistical model perform on the trial runs, the majority of subjects bet on the model being more accurate in the money-earning round—they chose to tie their earnings to its performance rather than the human’s. But if they had seen it perform, the majority bet on the human. That’s despite the fact that the model outperformed the humans in every comparison, by margins ranging from 13 to 97 percent. Even when people saw the performance of both the model and the human in the trial runs, and saw that the model did better, they still tended to tie their winnings in the earnings round to the human over the model. They were more accepting of the human’s mistakes.

These findings surprised the researchers. They had expected people to shy away from algorithms at first and then change their minds after seeing their superior performance, Dietvorst says. Instead, he says, they “found completely the opposite.”

It’s not the first time research has shown that people making decisions tend to dismiss algorithms. A 2006 paper reported that physicians’ recommendations were more likely to be followed than those from a computer. A 2009 paper reported that stock forecasts purportedly from human experts swayed people’s price estimates more than statistical forecasts did. And a 2012 paper reported that legal and medical decisions made by a human were assumed to be more accurate and ethical than those made by a computer.

Balfoort speculates that people distrust an algorithm “if it touches either your health or your ego”: Wine experts, he says, have warmed more slowly to WineStein. In some cases, the ego might be humanity’s collective self-regard. Dietvorst suspects we distrust algorithms in domains where we think “human judgment captures something that a computer couldn’t,” such as school admissions.

There can be a real cost to this aversion. A 2000 meta-analysis summarized 136 studies comparing predictions made by experts with those made by equations, in areas including medical prognosis, academic performance, business success, and criminal behavior. Mechanical predictions beat clinical predictions about half the time, while humans outperformed equations in only 6 percent of cases. Those are judgments with significant implications for our lives, and it’s a genuine loss to ignore a system that can give us much more accurate answers.

The researchers at Penn are now exploring why people abandon algorithms so quickly. There may be a hint in the fact that subjects judged the statistical model as worse than humans at learning from mistakes and getting better with practice. Perhaps people could learn to trust algorithms more if they were told that computers can learn. Balfoort says that once you inform customers that WineStein gets better with feedback, “you get a satisfied nod.”

In current work, Dietvorst is finding that giving people the chance to alter an algorithm’s forecast even a trivial amount increases their adoption of it over a human counterpart, from less than 50 percent to more than 70 percent. People “don’t want to completely surrender control,” he says. Ironically, the user’s adjustment usually makes the computer forecast slightly worse, but at least it gets them to use it.

Holly Yanco, a roboticist at the University of Massachusetts Lowell, is also finding that a little human mediation can make a difference. In surveys, participants imagine IBM’s Watson suggesting a personalized cancer treatment. They accept the system’s use—as long as a doctor supervises its proposals, she says. “We’re looking for people to provide that last sanity check.”

Then there’s the option of just making computers seem a tiny bit more similar to people. The psychologist Adam Waytz reported last year that people trust autonomous vehicles more when the car has been given a name and a gender, anthropomorphizing it. Yanco has found that if a robot communicates uncertainty before making a mistake, as a human would, people don’t lose trust in it. “It’s not awful that the system can’t do everything perfectly,” she says, “but it’s awful if we think it’s supposed to be doing everything perfectly and then it doesn’t.”

What will eventually help our robot trust issues most of all, however, is getting the algorithms closer to that kind of perfect reliability—and becoming accustomed to them as a part of everyday life. Trusting a new friend, after all, even an electronic one, is often just a matter of time.

Matthew Hutson is a science writer and the author of “The 7 Laws of Magical Thinking.”
