Tufts University and the Navy are partnering on a research project to develop robots that can think for themselves, making moral judgments and performing tasks according to how they weigh right, wrong, and the gray in between.
Researchers at the school’s Human-Robot Interaction Laboratory will explore programming techniques that could eventually give machines the ability to react appropriately in ethical binds, without people having to instruct them.
On the battlefield, the ability to reason could enable a robot programmed to transport medical supplies to abort its mission and treat a wounded soldier it encounters along the way. A drone sent to strike an enemy stronghold could hold its fire if it detects civilians in the area — or decide to shoot anyway if there is imminent danger to American troops.
In nonmilitary settings, a thinking robot ordered not to touch patients in a hospital could make an exception to assist an elderly man who had fallen down. It could even police the ethical behavior of humans in a workplace.
The prospect of such high-level artificial intelligence is both exciting and slightly creepy, acknowledged Matthias Scheutz, a computer science professor at Tufts who will serve as principal investigator on the project. But he contended that wiring robots with some kind of moral compass will be critical as machines play larger roles in everyday life.
“If you have a robot that works remotely on Mars, there’s not much of a concern there,” Scheutz said. “But if the robot interacts with people — and our society is based on the law and moral principles — then the robot has to be sensitive to those.”
The three- to five-year project will be funded by the Office of Naval Research and includes scientists from Brown University and Rensselaer Polytechnic Institute. The team does not expect to engineer robots that come close to approximating human thought in so little time, but hopes to get a feel for what might be possible. Could machines really navigate choppy ethical waters?
“We won’t have cut-and-dried answers in five years. Not a chance,” said Paul Bello, the program officer leading the robotics effort for the Navy. “But right now, we have nothing. You can think of it as something like physics before Newton.”
A first step in the project might be programming robots with rigid sets of rules that tell them how to act when presented with certain moral dilemmas. For instance, a humanoid robot designed to serve as a personal aide could also be given a list of commands it must reject, like harming another person. Placed in a tug-of-war between conflicting obligations — to obey its master and eschew violence — the robot would give pacifism higher priority and ignore an order to hurt someone.
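The rule-priority scheme described above can be sketched in a few lines of code. The example below is a hypothetical illustration, not the Tufts team's actual system: it assumes a fixed list of rules, each with a numeric priority, where any matching prohibition with higher priority than obedience vetoes the command.

```python
# Minimal sketch of a rigid rule-priority system: each rule carries a
# priority, and a command is rejected if any prohibition matches it.
# The rule names and matching logic here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    name: str
    priority: int                      # higher number = higher priority
    prohibits: Callable[[str], bool]   # True if the command violates the rule

# Hypothetical rule set: pacifism outranks obedience to the master.
RULES = [
    Rule("do_no_harm", priority=10, prohibits=lambda cmd: "harm" in cmd),
    Rule("obey_master", priority=1, prohibits=lambda cmd: False),
]

def evaluate(command: str) -> str:
    """Return 'execute', or the name of the highest-priority rule vetoing the command."""
    vetoes = [r for r in RULES if r.prohibits(command)]
    if not vetoes:
        return "execute"
    return max(vetoes, key=lambda r: r.priority).name
```

Under this toy model, `evaluate("fetch medical supplies")` returns `"execute"`, while `evaluate("harm the intruder")` is vetoed by `"do_no_harm"` regardless of who gave the order, which is exactly the rigidity the researchers hope to move beyond.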
Yet it is not hard to imagine more complex scenarios. What if the master’s life is threatened? What if her life is not in jeopardy but she is pushed to the ground by a purse snatcher? What if an apparent act of violence is actually a playful punch on the arm by a close friend?
It is impossible to anticipate all the situations a robot might encounter, Scheutz said, which is why the goal is to advance beyond preset rules to true cognition and create a robot that can reason through tough problems and justify its actions.
“The robot has to be able to cope with circumstances that are not foreseeable by designers,” he said. “It’s a very hard problem. If I had all the answers, it wouldn’t be a research project.”