On the big screen, robots such as the Terminator are monsters, wreaking havoc on people and civilization. But in real life, is it possible that the robots are . . . the victims?
From the tech-friendly Bay Area to shopping malls in Japan to the mean streets of Philadelphia, robots are getting smacked around when they mix in with the general population. In June, the maker of a wheeled robot that delivers takeout food on corporate campuses in California and in Europe reported that humans occasionally kick its machines for no apparent reason, or flip them onto their sides, like a toppled tortoise. In San Francisco last year, homeless people draped a tarp over a security robot that patrolled the parking lot of a nonprofit, tipped it over, and smeared barbecue sauce over its camera lenses.
Maybe it’s mere vandalism. Or maybe humans are striking out from some deeper anxiety.
“Robots that look and act like people are a threat to human identity,” said Karl MacDorman, associate professor of human-computer interaction at Indiana University. “A robot that could serve as a stand-in for a person could threaten your sense of self. It does this by diminishing what it means to be human.”
It’s hardly a global crime wave. Today, most robots are found on factory floors, or they’re vacuuming floors in our homes. But artificially intelligent machines are quickly getting more mobile and more capable. The proliferation of self-driving cars, delivery droids, robot security guards, and the like will put them in daily contact with millions of people, vastly increasing the opportunities for some humans to act out their frustration, anger, or mischief-making.
Google has reported that two of the company’s self-driving cars have been assaulted by angry humans. Knightscope, the maker of the parking lot security robot, which looks like it could be vaguely related to R2-D2, said that one of its models was assaulted in April by a drunk man in Mountain View, Calif.
Then there was Hitchbot, a talking robot made by Canadian researchers for a project to hitchhike around two continents. Hitchbot traveled successfully through Canada and Germany, only to be destroyed by vandals as it waited for a ride in Philadelphia.
As in any criminal investigation, finding a motive is the first step.
“In some cases, it may be simple vandalism out of boredom,” said Kate Darling, a research specialist at the Massachusetts Institute of Technology Media Lab who studies the ethical issues raised by human-robot interactions. Just as some people spray-paint graffiti on walls merely because they can, others may smash robots just because they are there. For these attackers, it’s nothing personal.
“In other cases,” Darling said, “people may be annoyed at how the robot is being used.”
For example, the homeless people in San Francisco probably resented being constantly monitored by a rolling video camera, Darling suggested, while sandwich delivery droids could have been kicked because people objected to sharing crowded sidewalks with a robot.
Psychologist Tom Guarriello said that many people see smart machines as a threat to their livelihoods. A marketing instructor at the School of Visual Arts in New York and the host of RoboPsych, a podcast on human-robot interaction, Guarriello cited recent estimates that robots and computers could replace nearly half of the US workforce.
“Being afraid of losing one’s job to a robot is not going to endear you to the technology,” he said. “It’s going to make people feel nervous, and some of the anxiety caused by those numbers is going to express itself through violence.”
In a bid to fend off future attacks, some robot specialists are studying ways to persuade humans to help machines that can’t defend themselves.
Xiang Zhi Tan, a doctoral student at the Robotics Institute at Carnegie Mellon University, based his research on anti-bullying programs that encourage people to intervene when they see other people under attack.
So far, Tan’s findings don’t provide much cause for optimism. In a test where 48 people saw someone abusing a robot, 45 of them offered assistance, but only after the assault had ended and the machine was lying on its side. Some participants said they felt bad for the robot — but none felt bad enough to put a stop to the abuse.
Scientists at Osaka University in Japan may have found a way to protect robots — from children, at least. In Japan, there had been instances of small children punching and kicking robots in public places, such as shopping malls. So the Osaka scientists wrote software for robots to distinguish children from adults by their height. When a child approached, the robot would move closer to the nearest adult. Sure enough, children were less likely to attack with a grown-up hovering.
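The Osaka behavior amounts to a simple rule: classify each nearby person as a child or an adult by height, and if a child is approaching, steer toward the closest adult. A minimal sketch of that logic, in Python, might look like the following. Everything here is an assumption for illustration: the height cutoff, the function names, and the shape of the sensor data are hypothetical, not the researchers' actual software.

```python
# Hypothetical sketch of the Osaka-style child-avoidance rule.
# The 1.4 m cutoff is an assumed value; the study's real threshold
# and sensor interface are not described in the article.

CHILD_HEIGHT_CUTOFF_M = 1.4  # assumed: shorter than this counts as a child


def is_child(height_m: float) -> bool:
    """Classify a person as a child purely by height, per the article."""
    return height_m < CHILD_HEIGHT_CUTOFF_M


def choose_move(people: list[dict]) -> str:
    """Pick the robot's next move given nearby people.

    Each person is a dict like {"height": 1.2, "distance": 2.0}
    (height in meters, distance from the robot in meters).
    If a child is near and an adult is present, head for the
    closest adult; with no adult to shelter near, back away.
    """
    children = [p for p in people if is_child(p["height"])]
    adults = [p for p in people if not is_child(p["height"])]
    if children and adults:
        nearest_adult = min(adults, key=lambda p: p["distance"])
        return f"move toward adult at {nearest_adult['distance']} m"
    if children:
        return "retreat"  # no adult nearby to move closer to
    return "continue patrol"
```

For example, with a 1.1 m child one meter away and a 1.8 m adult three meters away, `choose_move` would send the robot toward the adult; with nobody around, it keeps patrolling.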
A 2015 research paper by Darling at MIT suggests that psychology, not software, might better protect robots. In her experiment, Darling asked about 100 volunteers to hit small robots with hammers. But first, some volunteers were told a story about the robot — that its name was Frank, that it liked to play, and that its favorite color was red. Those volunteers were much more hesitant to hit the machine.
Darling suggested that providing robots with a sympathetic biography humanizes them, generating feelings of empathy that will make it less likely that people will attack.
It’s hard to imagine someone harming a robot that reminded them of C-3PO from “Star Wars.” If smart marketers introduced the machines we see on the streets with appealing tales — they’re brave or kind or have a goofy sense of humor — maybe more of us would give them a friendly wave instead of a nasty kick.
It might work. After all, even the Terminator T-800 stopped killing people — once he got to know a few.

Hiawatha Bray can be reached at firstname.lastname@example.org. Follow him on Twitter @GlobeTechLab.