If you frequented Kendall Square in the 1990s, you may have encountered the pioneers of wearable computing: students who ambled around Cambridge wearing special goggles with built-in cameras and display screens, toting computers in backpacks and messenger bags, and palming special one-handed keypads so they could enter data. Sprouting wires everywhere, they looked like cyborgs late for a Halloween party.
Thad Starner was part of the bunch, who called themselves the “borgs.”
“It was clear to me that this was going to be a lifestyle that was compelling,” says Starner, who began wearing a computer and display regularly in 1993. “Wherever I was, I could pull up local maps. I would learn stuff from having hallway conversations with other researchers, and I had a system that let me take notes to remember what they said.” The rest of us, however, just weren’t ready to don computers.
But Starner and four other MIT alumni who were part of the first wave of wearable computing have reunited in Silicon Valley, where they now work for Google. The tech behemoth has recently begun promoting a prototype device called Google Glass, eyewear that puts relevant information into your field of vision, rather than on a smartphone in your hand.
Google Glass, which includes a transparent digital display extending out in front of one eye and a touchable control panel on the temple, could finally push wearable computing into the mainstream. But many questions about adding an information layer to the world we see around us remain unanswered, more than 15 years after students and entrepreneurs in Massachusetts tried to jump-start the field.
In the days before Wi-Fi and 3G, MIT student Steve Mann created a wearable webcam that could snap photos and send them over amateur TV frequencies to a computer that posted them on a Web page. Other student projects could watch a deaf person communicate in sign language and translate the gestures into synthesized speech, or look at balls arrayed on a billiard table and suggest the best possible shot.
Rich DeVaul, now at Google, experimented with flashing subliminal messages on a glasses-based display to remind wearers about shopping items, their next meetings, or the name of someone standing before them. (DeVaul optimistically predicted that his “memory glasses” would be on the market by 2005 and sell for $300.)
One of the earliest local companies to develop wearable technology was MicroOptical Corp., founded in 1995. Initially, the company landed military contracts to develop head-mounted displays for soldiers, so that they could see updated information about troop movements overlaid on a map, for instance.
Later, the company sold tiny $2,500 screens that could be clipped onto a pair of glasses. Anesthesiologists could use them to monitor a patient’s vital signs during surgery, as they moved around the operating room.
But MicroOptical had two problems, according to founder Mark Spitzer. First, there weren’t yet smartphones or lightweight tablets for the company’s displays to plug into. And second, the company was targeting small industry niches where people might use its product.
“Even when we’d get traction in one area, we’d have to spend more on marketing to reach people in the next niche,” says Spitzer.
The company later shifted to a thin pair of RoboCop-style glasses that could plug into an iPod, so users could watch videos privately on a larger screen. But the company didn’t survive the last recession and its assets were sold off.
Many of the translation, navigation, and information-retrieval tasks that the wearable pioneers worked on were subsumed into the smartphone and its universe of apps. “Instead of wearing technology on your body, Apple took all this technology and fit it into these hard, square boxes you hold in your hand,” says Maggie Orth, an MIT alumna who worked on integrating computers into clothing.
In 2010 and 2011, Google began hiring some of the MIT veterans to work on Google Glass. They were true believers, and were never convinced that smartphones represented the endpoint for personal technology.
“When you want to take a picture of that funny moment with your new baby,” Starner says, “you have to get the phone out of your pocket, turn it on, type in your passcode, find the camera app, boot it up, and click. With Glass, that’s a single button-push. It allows you to stay heads-up and in the moment more than a cellphone can ever do.”
Google will begin shipping $1,500 prototype devices next year, but at first only to software developers who want to build applications for them. A big challenge lies ahead, however. Will consumers like — or at least tolerate — the way they look while wearing Google Glass?
“People are very vain, and so the reason for having the device on needs to be really compelling,” says Paul Zavracky, former president of MicroOptical.
Pattie Maes, an MIT professor who supervised some of the early research on wearables, observes that the devices will change our social interactions. If someone wearing connected specs remembers your kids’ names and that you enjoy playing tennis, is that because they genuinely care, or because they got a computer assist?
Maes recalls meetings with a student who used wearable technology. In conversations, he would regularly note when she was changing or contradicting statements she had made months earlier. “It was very unequal,” she says. “He had this perfect memory, and immediate access to it.”
Also, Maes adds, “You never knew what he was looking at, or what notes he was taking.”
But part of the attraction of new technology has always been that it affords us super-human abilities. It’s easy for most of us to imagine some circumstance in which it would be useful to have a snippet of information floating before our eyes, or a camera that’s ready to snap at any moment. If wearable displays like Google Glass can surmount the aesthetic and interpersonal hurdles — two enormous ifs — the MIT borgs may finally see their sci-fi lifestyle go mainstream.