
Making artificial intelligence more human

MIT center, backed by $25m federal grant, learning from infant brain research

CAMBRIDGE — The bold quest to build intelligent machines has, after more than half a century, brought us to this point: Scientists can build a “Jeopardy!” champion, but a child can handily outperform a computer when it comes to deciphering social situations, learning, or pretty much any activity outside the machine’s narrow band of expertise.

To change that, a group of leading infant researchers, neurobiologists, computer scientists, and robotics and software companies is joining forces in a major effort to finally achieve, and even expand, the grandiose ambitions of artificial intelligence, supported by a $25 million grant from the National Science Foundation.

At a new center based at the Massachusetts Institute of Technology, researchers will seek to craft intelligence that includes not just knowledge but also an infant’s ability to intuit basic concepts of psychology or physics. Answering cleverly posed trivia questions is impressive, but perhaps more so is the ability to make sense of observations — answering seemingly simple inquiries such as “what is the object closest to the window?” or “what is the woman looking at?”

“I think this is the greatest problem in science and technology, greater than the origin of the universe or the origin of life or the nature of matter, partly because it’s a problem about who we are,” said Tomaso Poggio, the director of the new Center for Brains, Minds, and Machines, and a professor at MIT. “It’s a problem about the very tool you use to solve all other problems: your brain.”

The center will draw together 20 faculty members from MIT, Harvard University, and other major universities, as well as business partners that include Google, Microsoft, and local robotics companies Boston Dynamics and Rethink Robotics.

Recognizing the importance of advances in technology and science, President Obama announced a major national effort this year to map the activity of the brain. The ambition to build intelligent machines, scientists argue, is at a pivotal moment for similar reasons, fueled not just by advances in computers and robotics but by progress in the ability to probe the brain circuits that underlie specific behaviors and by greater understanding of how intelligence develops in the infant brain.

The past few decades have seen an enormous flood of information about the infant and child mind, overturning ideas about what babies know and how they learn. For example, developmental psychologists have delineated precisely how, over the first year of life, children begin to grasp the way gravity acts on objects.

“In the early days, understanding intelligence had to do with reasoning and problem-solving,” said Patrick Winston, a professor of engineering at MIT and the center’s research coordinator. “On the science side, we’ve moved away from mathematical reasoning to common sense.”

For Winston, what most sets human intelligence apart from machines — and from the rest of the animal world — is our ability to tell and comprehend stories. Finding sense in a fairy tale may seem trivial, but Winston has been working for years on something not far from it: a computer program called the Genesis project that can be fed a block of text and do something approximating what we do when we read a story. Given a quick summary of Shakespeare’s “Macbeth” or a story about a conflict between two countries, the program tries to draw causal links and figure out why things happened and what it all means. It can detect concepts such as revenge and assess people’s character.

At his office in the Stata Center, Winston showed the program at work. Boxes popped up on the screen and rearranged themselves as the program essentially diagrammed sentences and paragraphs. At this point, Genesis can easily be broken by a story it does not comprehend, but the goal is to build up, within a year, a library of more than 100 stories it can correctly parse. In demo mode, it was sensitive to slight alterations in meaning, detecting that a politician “forcing” a country to move toward democracy was different from “asking” for the same outcome.

The machine has been taught concepts, such as “the enemy of my enemy is my friend.” These lessons can seem almost humorous when written in the explanatory language for the computer: “XX is an entity. YY is an entity. XX harms YY. . . . XX’s harming YY leads to YY’s harming XX.” But in some ways, that lesson is not all that different from how people acquire many of those concepts. People, too, have to be told what a Pyrrhic victory is.
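To give a flavor of what such a lesson looks like when written out for a machine, here is a minimal sketch in Python of how rules like “harming invites retaliation” and “the enemy of my enemy is my friend” could be encoded as if-then inferences over simple facts. This is only an illustration of the general idea; it is not the Genesis project’s actual representation or code, and every name in it is invented.

```python
# Toy forward-chaining illustration (not the Genesis project's actual code):
# facts are (subject, verb, object) triples; rules infer new facts from them.

facts = {("country_XX", "harms", "country_YY")}

def retaliation_rule(facts):
    """If XX harms YY, infer that YY's harming XX may follow."""
    return {(obj, "may_harm", subj) for (subj, verb, obj) in facts if verb == "harms"}

def enemy_of_enemy_rule(facts):
    """If XX and YY both harm the same victim, infer they may be allies."""
    inferred = set()
    harmers_by_victim = {}
    for subj, verb, obj in facts:
        if verb == "harms":
            harmers_by_victim.setdefault(obj, set()).add(subj)
    for attackers in harmers_by_victim.values():
        for a in attackers:
            for b in attackers:
                if a != b:
                    inferred.add((a, "may_ally_with", b))
    return inferred

new_facts = retaliation_rule(facts) | enemy_of_enemy_rule(facts)
print(new_facts)  # {('country_YY', 'may_harm', 'country_XX')}
```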

Across the street, at a building teeming with neurobiologists and cognitive scientists, Joshua Tenenbaum, another member of the center, is taking a slightly different tack in the effort to understand intelligence: trying to build a child’s mind.

“Let’s try to reverse-engineer the early stages of cognition. What do young babies know?” Tenenbaum said. “Even young babies, 3 or 4 years old, are more intelligent than any machine has ever been. Let’s build a road map of cognitive development over the first three years of life, but let’s build it in engineering terms — the same terms I would use to build a self-driving car.”

The reason such an effort now seems plausible, Tenenbaum said, is that engineers and developmental psychologists are finally using the same powerful type of math, whether they are working to design programs or describe the developing mind.
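The article does not name that shared mathematics, but work in this tradition is often described in terms of probabilistic inference. As a purely illustrative sketch under that assumption, with invented names and numbers, here is how Bayes’ rule lets a learner (a program, or a model of an infant) update a belief about a hidden property of the world from a handful of observations.

```python
# Purely illustrative Bayesian update (an assumed stand-in for the "shared math",
# not anything specific from the MIT center): infer whether an unseen support is
# solid, given repeated observations of a block staying up or falling.

prior_solid = 0.5          # initial belief that the support is solid
p_stays_if_solid = 0.95    # a block usually stays put on a solid support
p_stays_if_not = 0.20      # it occasionally stays up anyway by chance

belief = prior_solid
for observation in ["stays", "stays", "falls"]:
    if observation == "stays":
        like_solid, like_not = p_stays_if_solid, p_stays_if_not
    else:
        like_solid, like_not = 1 - p_stays_if_solid, 1 - p_stays_if_not
    # Bayes' rule: posterior is proportional to likelihood * prior
    numerator = like_solid * belief
    belief = numerator / (numerator + like_not * (1 - belief))
    print(f"after '{observation}': P(solid) = {belief:.2f}")
```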

Progress has already been made in artificial intelligence, and in some task-based areas it has been tremendous. Poggio, the center director, said that when he started in the field three decades ago, he worked on developing a camera system that could detect pedestrians. A computer vision system that made about 10 mistakes a second was then considered a major feat. Now, he said, computer vision systems developed for driving applications make a mistake only once every 30,000 hours of driving.

It’s even possible to see computers overtaking some human abilities. Researcher Joel Leibo has been working on a vision system that recognizes faces at a glance, even when given the difficult challenge of matching people’s faces shown from different angles. The system is getting more sophisticated; it did only slightly worse than a research assistant who tried the same face-matching task.

“There are systems that are doing certain things that are really difficult to do — and difficult for humans to be able to do,” Poggio said. “But none of these systems are really intelligent; you cannot have a conversation with these systems.”

These researchers want to build a different kind of intelligence. Imagine a cafeteria at MIT at lunchtime. People are doing what people do daily: talking, eating, arguing, sipping a drink through a straw. Researchers want to build intelligence smart enough to take in the scene and describe, in words, exactly what’s happening.


Carolyn Y. Johnson can be reached at cjohnson@globe.com. Follow her on Twitter @carolynyjohnson.