Security systems aren’t designed to be fun. Whether they involve turning a key, flashing an ID card, or typing a convoluted password, proving who we are again and again is one of modern life’s chores. But two Boston University engineering professors, Janusz Konrad and Prakash Ishwar, think they have a better way. Along with colleagues at New York University, they’re developing gesture-based recognition software that might allow us to dance, mime, and shadow-puppet our way into offices, labs, cars, and personal devices.
Their approach depends on the insight that your movements, whether they’re gawky or suave, are as identifying as your handwriting or fingerprints. Your physical features help distinguish you, but the unique cadence of your movements does so even more. “The dynamics are what matters,” says Konrad, head of BU’s Visual Processing Laboratory, and these are surprisingly personalized. He found it impossible to copy the gesture of a student—not only because Konrad was taller, but because the younger student was more flexible and had different timing.
The professors’ system builds on motion-sensing devices like Microsoft’s Kinect. But where Kinect games aim to determine when you’re waving your hand or jumping so an avatar can follow suit, their software uses the technology for a different purpose: to identify the unique way you do it. “We’re not trying to recognize gestures per se; we’re trying to recognize users making the gestures,” says Ishwar, associate professor in BU’s Center for Information and Systems Engineering.
The approach shares some advantages of biometric security systems like facial or fingerprint recognition that rely on basic physical features that are hard to imitate. But including a gesture adds an adjustable component as well. If someone manages to steal or copy your gesture, you can always change it. It also gets around some problems of facial recognition: Software that recognizes your face can feel intrusive, but performing a bow or a jig for a camera is a voluntary act—and one that can’t be matched to you unless you’re doing it.
To establish your unique gesture password, the researchers use a Kinect camera to create a simple “skeleton” stick figure of your particular body (the software could be used with other motion-sensing cameras as well). Then, as you gesture, the coordinates of each point on the skeleton shift in a pattern over time. As you perform the same gesture a few times, the software learns the unique dynamics of your movement and the slight fluctuations in how you perform it. Once it has these down, your body and rhythm alone can crack the code.
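To make the idea concrete, here is a toy sketch, not the researchers’ actual algorithm, of how such a system might work: each gesture is recorded as a time series of skeleton joint coordinates, a few enrollment repetitions are stored as templates, and a new attempt is accepted if it lies close enough to a template under dynamic time warping, a standard way to tolerate the slight timing fluctuations between performances. All function names and the threshold value here are illustrative assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two gesture sequences.

    Each sequence is an array of frames; each frame is a flattened
    vector of joint coordinates. DTW aligns the sequences so that
    slightly faster or slower performances still match well.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame in b
                                 cost[i, j - 1],      # skip a frame in a
                                 cost[i - 1, j - 1])  # advance both
    return cost[n, m]

def enroll(repetitions):
    """Store a few repetitions of the user's gesture as templates."""
    return [np.asarray(r, dtype=float) for r in repetitions]

def verify(templates, attempt, threshold):
    """Accept the attempt if it is close enough to any enrolled template."""
    attempt = np.asarray(attempt, dtype=float)
    return min(dtw_distance(t, attempt) for t in templates) < threshold
```

For example, a circular hand path re-performed at a slightly different speed would score a small DTW distance against its enrolled template, while a different trajectory would score a large one; picking the acceptance threshold is exactly the variability-versus-forgery trade-off the researchers describe.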
We’re still a long way off from doing the Electric Slide through airport security. With funding from the National Science Foundation, the team is in the early stages of developing algorithms and studying the feasibility of the approach. The challenge is allowing for enough variability in how you perform the gesture, while also ensuring that your gestural signature is safe from “forgers”—in this case, people who would attempt to cop your moves.
You can watch a video of Konrad and Ishwar explaining their system online.

Courtney Humphries is a freelance writer in Boston.