
Q&A

I, Robot, know what I know

Jeff J Mitchell/Getty Images

The Shadow Robot Company’s dexterous hand robot holds an apple at the Streetwise Robots event held at the Science Museum’s Dana Centre on May 6, 2008 in London, England.

Human beings are introspective creatures; we know what we know and we know what we don’t know.

We do not expect the same of our machines. But should we? Would our cellphones, gadgets, and robots benefit from being cognizant of the limitations of their knowledge? Would they work better or be less annoying?


To MIT computer scientist Leslie Pack Kaelbling, this is not a philosophical question. It’s a real problem in robotics and artificial intelligence. Before robots can be deployed in the sorts of uncontrolled environments that exist outside of factory settings or be upgraded beyond simple tasks such as cleaning a pool, they need to be able to evaluate their own uncertainty.

Can the robot reach a particular object without knocking everything else off the shelf? Does it need to take a closer look? Should it ask a question to figure out which task to do first? Kaelbling just coauthored a paper, to be published in the International Journal of Robotics Research, describing how to integrate calculations of uncertainty about the present and the future into robots’ programming.
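The core computation behind this kind of reasoning is a “belief”: a probability distribution over possible states of the world that the robot updates as it senses and acts. As a minimal sketch, here is a discrete Bayes filter applied to the refrigerator search Kaelbling describes below; this is an illustrative toy, not code from the paper, and every name and number in it is invented.

```python
# Illustrative only: a discrete Bayes filter, the standard way a robot
# keeps a running estimate of its uncertainty about the present.
# All states, numbers, and names here are hypothetical.

def bayes_update(belief, likelihood):
    """Fold one observation into the belief (a dict of state -> probability).

    likelihood[state] is the probability of getting this observation
    if the object were really in that state.
    """
    posterior = {s: belief[s] * likelihood.get(s, 0.0) for s in belief}
    total = sum(posterior.values())
    if total == 0.0:
        return belief  # observation carried no usable information
    return {s: p / total for s, p in posterior.items()}

# The robot starts out maximally uncertain about where the drink is.
belief = {"top_shelf": 1/3, "middle_shelf": 1/3, "door": 1/3}

# A glance at the middle shelf comes back negative (the sensor is imperfect).
likelihood = {"top_shelf": 0.5, "middle_shelf": 0.1, "door": 0.5}
belief = bayes_update(belief, likelihood)
print(belief)  # probability mass shifts away from the middle shelf
```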

Kaelbling answered questions about her work by e-mail.

Q: We don’t usually think about machines as being uncertain, and we don’t often think of uncertainty as an ability. Can you explain the system you’ve designed and why it might be beneficial for uncertainty to be part of robots’ programming?

A: Uncertainty itself is not an ability. ... It’s an inevitable state. In any interesting domain, no robot can be without uncertainty. The important ability is an awareness of one’s own uncertainty. So, for instance, if the robot is not sure enough about the position of an object it wants to pick up, it can decide to look at it from another angle to localize it better. If it is not sure enough about where to find the drink you asked for in the refrigerator, it will either have to move objects in the refrigerator to try to find it or ask you where to look.
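In code, that awareness can be as simple as summarizing how spread out the robot’s estimate of an object’s position is, and switching to an information-gathering action when the spread exceeds what the gripper can tolerate. The sketch below is a hypothetical illustration of the idea, not Kaelbling’s system; the tolerance value, names, and numbers are assumptions.

```python
import numpy as np

# Hypothetical sketch: uncertainty-aware action selection.
# The robot keeps a Gaussian estimate of an object's position; if the
# estimate is too loose to grasp safely, it looks again instead.

GRASP_TOLERANCE = 0.01  # metres of slack the gripper can absorb (assumed)

def choose_action(covariance):
    # The largest standard deviation of the estimate is a scalar answer
    # to "how unsure am I about where the object is?"
    worst_std = np.sqrt(np.max(np.linalg.eigvalsh(covariance)))
    if worst_std > GRASP_TOLERANCE:
        return "look_from_new_angle"  # gather information first
    return "grasp"                    # confident enough to act

# After one blurry observation, the estimate is still too loose to grasp.
covariance = np.diag([0.03 ** 2, 0.01 ** 2])
print(choose_action(covariance))  # -> look_from_new_angle
```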

Q: If a robot is programmed to know it can be wrong about something or has limitations in its abilities, will that run the risk of having the opposite effect: robots that become paralyzed and unable to accomplish tasks when they don’t have complete knowledge?

A: The principle of rationality applies here, as well. The robot should take the actions that will have the best outcome in expectation, no matter how uncertain the robot is. Actions might gain information for the robot, or change the state of the environment, or both. Of course, there are situations in which the rational choice is to not do anything: that will only be true when the expected utility of inaction is higher than that of any of the other action choices.
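The decision rule Kaelbling is describing is the textbook one from decision theory: score each action by its expected utility, the probability-weighted average of its possible payoffs, and pick the maximum, treating “do nothing” as just another candidate. A toy calculation, with actions and numbers invented for illustration:

```python
# Toy expected-utility comparison; the actions, outcomes, and numbers
# are made up for illustration.

actions = {
    # action: list of (probability, utility) outcome pairs
    "grasp_now":   [(0.7, 10.0), (0.3, -20.0)],  # might knock things over
    "look_closer": [(1.0, 4.0)],                 # cheap and informative
    "do_nothing":  [(1.0, 0.0)],                 # inaction is an option too
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(f"{action}: {expected_utility(outcomes):+.1f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("rational choice:", best)  # -> look_closer
```

Here inaction only wins when its expected utility tops every alternative, exactly as Kaelbling says.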

Q: In what types of situations do you see applications for robots capable of this type of reasoning?

A: Classical factory robots don’t need this kind of reasoning, because their environment is carefully engineered so there is no variability. Very simple robots, like vacuum cleaners, don’t need this kind of reasoning, because they execute the same simple strategy no matter what environment they are in. Particular domains where reasoning about uncertainty is important include household robots (for elderly or disabled or regular! people), very flexible manufacturing robots that change tasks frequently and work in informal environments, and disaster-relief robots.

Q: Children often seem to go through stages where they learn the limitations of their own understanding and abilities. Do you look to developmental psychology at all for hints on how robots should learn these behaviors?

A: No. That’s not because we think that developmental psychology is not an interesting scientific endeavor. We think that there might be other paths to intelligence besides one that mimics humans, and that those paths might actually be easier for human engineers to follow.

Carolyn Y. Johnson can be reached at cjohnson@globe.com. Follow her on Twitter @carolynyjohnson.