John Leonard still remembers his first brushes with Boston’s traffic.
The year was 1991, and he had just arrived from Philadelphia, via Oxford, for postdoctoral study at the Massachusetts Institute of Technology. Where Leonard came from, drivers exhibited their brotherly love by letting others merge and by signaling when they turned. The roads were not under construction by default, and traffic lanes were generally clearly marked.
Then he tried to find his way from the Longfellow Bridge to Storrow Drive.
“It took me about six months to figure out which was the right lane,” Leonard recalled.
Now a professor of robotics at MIT, Leonard has something of a professional interest in how a robot — say, the self-driving car under development by Google — might negotiate Boston's streets, given that a smart guy like him had trouble figuring them out. Last week, he even had the chance to ride in Google's car when he went to San Francisco for a conference on automated vehicles.
“I expected to be impressed, because the people involved are so good,” he said. Although the car he was in encountered many cyclists and pedestrians, Leonard said the ride was as smooth as one with a professional human driver, with the car staying in its lane and negotiating minor road construction.
“I had total confidence in the car,” he said.
But Leonard said it could be a while before the vehicle ventures onto the treacherous streets of Boston. He said the cars need to be tested under a number of different and difficult driving conditions, so that Google’s technology can prove itself in the absence of highly precise maps, like the kind it uses to get around Mountain View, Calif., and common driver aids like lane markers, road signs, and uniform intersections.
“Robots should just be able to navigate the way we navigate,” said Leonard, who has designed ocean-going and land-bound robots. “That’s been a challenge in robotics for 40 years.”
Like many other scientists, Leonard was excited about the prospect of a driverless car even before Google launched its effort in 2009. But Leonard is so curious about the technology that last year he filmed numerous expeditions from behind the wheel to document the obstacles Google faces, especially in a place as notoriously difficult to drive in as Boston.
How, for example, would the car know where to go when heavy snow obscures road markers, signs, and even parked cars? What to make of the erratic hand gestures from a police officer directing traffic around road construction? And maybe most important, what to do about that car in the next lane over, which is about to pull the kind of stunt that drives people to take public transit?
“Dealing with the unexpected is a challenge,” Leonard said. “Boston drivers seem to do the unexpected more than other drivers.”
Unlike a human driver, Google's cars do not rely on sight and sound. Instead, a combination of technologies gets the car from point A to point B without hitting anything — or anyone. A sensor embedded in one wheel precisely tracks the car's position, while radar and a roof-mounted laser mapping device detect nearby objects such as cars, guardrails, curbs, lane markers, pedestrians, and cyclists. A video camera reads traffic lights and helps recognize obstacles.
A robust software program digests all this data and moves the car accordingly.
Google engineers say their car is designed to be a defensive driver, so it brakes when cyclists cut in front of it or jaywalkers dash across the street. It is even programmed to replicate many of the unwritten rules of road conduct, such as inching forward at stop signs the way human drivers do.
These technologies have their limits, though, especially under the varied road conditions common across the United States. Many streets have faded or obscured lane markings, for example, which could confuse a self-driving car. And although the laser mapping is highly accurate, the device has a shorter range than radar, and it has trouble in tough weather, such as heavy rain or fog.
The technology’s biggest challenge could be the humans around it.
Leonard demonstrated the point during a drive around downtown Boston last month, issuing rapid-fire commentary about how a computer might interpret the actions of the cars around us.
“You have one car going left, you have another car going left, this car — ” Leonard said, pointing to a car rolling along in the parking lane to our right at about 10 miles per hour — “are they stopping, are they not? That’s a challenge, for a robot to be decisive.”
Leonard knows the consequences of robot indecisiveness firsthand. In 2007, he was a member of a team from MIT competing at the DARPA Urban Challenge, a road race involving robotic cars that served as a recruiting ground for Google’s self-driving car project. At the event, the car from a team at Cornell University stopped in the middle of a turn, only to accelerate when the MIT car tried to pass on its left. The resulting collision was one of the first robot-on-robot fender benders in history.
Then there are the driving situations that seem particular to Boston.
In his tour around town, Leonard steered his car off Storrow Drive and encountered one of those everyday interruptions that most veteran drivers would confidently steer around: a truck idled in one lane, with a worker picking up orange cones scattered across the lane and a police officer redirecting traffic with an indistinct flopping-arm motion.
That is one area where Google's engineers have acknowledged their self-driving car cannot yet cope. Although the car understands that a cyclist who extends his arm is about to turn, ambiguous movements like hand gestures from traffic cops are still difficult for the car to comprehend, Google engineers have said in published accounts.
Indeed, a Google spokesman said that urban driving is more hazardous than highway driving, owing to the presence of cyclists and pedestrians and the higher incidence of erratic behavior and circumstances in which a human driver would make decisions on the fly.
Leonard is quick to note that driverless cars have made huge progress in just a few years, and he is excited about the future of Google's project, especially because the company has assembled some of the leading scientists in robotics, machine learning, and engineering.
He is just trying to be realistic about how soon the self-driving car will take its place on American roads.
“They do call this a moon shot for a reason,” Leonard said. “It’s really hard to put a timeline on it.”

Jack Newsham can be reached at firstname.lastname@example.org.