
MIT researchers are working on this superpower: seeing around corners

Until now, anyone who claimed to see around corners was speaking in metaphor. MIT researchers say the superpower is getting closer to reality.

The researchers have developed a simple algorithm that processes video and converts it to a one-dimensional image of what’s going on around the corner, said Katie Bouman, a leading member of the team who received her PhD in electrical engineering and computer science from the school this year.

Imagine you’re standing up against a wall in front of a corner. You can’t see what’s happening around the corner, but you can see a fuzzy shadow at the base of the corner where the two walls meet.

You point your video camera at the shadow and run the video through your computer. The algorithm translates the information from the shadow into a one-dimensional map of the people moving in the room.

Bouman said the algorithm takes advantage of the fact that “the reflected light you see on the floor is related to the hidden scene in a very special way.”

As people walk around the room, they block small portions of the light reaching the floor, subtly changing the shadow’s intensity, Bouman said.

The algorithm takes the video images and “smushes” them into a one-dimensional map with lines on it that represent the activity in the room.

By tracking the movement of the lines, Bouman and her team can tell how many people are moving around the corner, the pattern of their motion, how fast they’re going, and more.
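The process the researchers describe — binning the penumbra by angle around the corner, subtracting the static background, and reading off intensity changes as people block light — can be sketched in a few lines of NumPy. This is a rough illustration of the idea only, not the team's released software; the function name, the wedge-binning scheme, and the use of an angular derivative as the 1-D readout are assumptions made for the sketch.

```python
import numpy as np

def one_d_trace(frames, corner, n_angles=64):
    """Hypothetical sketch: turn penumbra video into a 1-D
    angle-vs-time map of motion around the corner.

    frames: (T, H, W) grayscale video of the floor near the corner
    corner: (row, col) pixel location of the corner's base
    """
    T, H, W = frames.shape
    rows, cols = np.mgrid[0:H, 0:W]

    # Angle of each floor pixel about the corner point.
    theta = np.arctan2(rows - corner[0], cols - corner[1])

    # Assign every pixel to one of n_angles angular wedges.
    edges = np.linspace(theta.min(), theta.max(), n_angles + 1)
    bins = np.clip(np.digitize(theta, edges) - 1, 0, n_angles - 1)
    counts = np.bincount(bins.ravel(), minlength=n_angles)

    # Mean intensity per wedge, per frame ("smushing" 2-D into 1-D).
    profile = np.zeros((T, n_angles))
    for t in range(T):
        sums = np.bincount(bins.ravel(), weights=frames[t].ravel(),
                           minlength=n_angles)
        profile[t] = sums / np.maximum(counts, 1)

    # Remove the static background, then take a derivative across
    # angle: a person standing at some angle blocks light in the
    # wedges facing them, so the change shows up as a line in the map.
    background = profile.mean(axis=0)
    return np.diff(profile - background, axis=1)
```

Each row of the returned array is one moment in time; bright or dark streaks across rows correspond to the moving "lines" the researchers track to count people and estimate their speed.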

“Anybody can just download the software and take a video on a tripod, as long as it’s stable, at the base of a corner, and run this code to see one-dimensionally what’s around the corner,” said Vickie Ye, another member of the team and a master’s degree student at MIT.

“People think of this as a superpower, but I think there’s a lot of real applications it can have,” Bouman said.

For instance, while the algorithm only works on a steady camera for now, Ye and MIT researcher Felix Naser are working to apply it to moving cameras so it can be used in more scenarios.

The researchers envision their technology being used someday in hostage and barricade situations and search and rescue missions. On self-driving cars, it might be able to help avert tragedy when a child runs out into the street.

“I think it would be really nice, if you had someone . . . about to dart out, to be able to predict that in advance,” Bouman said. “When you first think about this, you might think it’s crazy or impossible, but we’ve shown that it’s not if you can understand the physics of how light propagates.”


Alyssa Meyers can be reached at alyssa.meyers@globe.com. Follow her on Twitter @ameyers_.