With 90 percent of car accidents caused by driver error, the holy grail of auto safety is to replace drivers with machines that don’t get tired, drunk, distracted, or even stressed out. That’s the vision Google and others are trying to execute by 2017. Although that target is just around the corner, efforts over the last year and a half to teach the car to handle city driving have produced major breakthroughs.
We went for a ride with Google in its self-driving car, on a two-to-three-mile loop around its hometown of Mountain View, Calif., and watched the car negotiate stop signs, yellow lights, jaywalkers, speed bumps, and railroad tracks with nary a hiccup. (Yes, it will wait before crossing the tracks to make sure there is room on the other side to proceed.) The car has learned how to negotiate bicycles, blind spots behind trucks, and even construction zones. A few things it can’t yet do are pull over for approaching sirens or respond to a police officer directing traffic. And like most other urban drivers, it ignores turn signals on other vehicles until the other driver starts crowding into its lane and then it yields appropriately. (It does recognize hand signals from bicyclists.)
The biggest challenge Google engineers acknowledged is what they refer to as “socializing” the car with other drivers. For example, when it comes to a four-way stop sign simultaneously with three other cars, it needs to know who goes first. Google engineers have trained the car to cheat slowly forward to indicate to other drivers that it wants its turn—just like a human driver. But it doesn’t proceed unless all the others stay stopped.
When it runs into questionable situations, the human driver can assist; so far, automated cars are all legally required to have a human behind the wheel. With the cars we sampled, there were actually two humans onboard: a driver behind the wheel and a technician with a laptop in the front passenger’s seat, recording data to be fed back into the cars’ elaborately detailed maps and expand their capabilities. So far, project director Chris Urmson and lead software developer Dmitri Dolgov have logged about 200 continuous miles without human intervention. But as Urmson says, “If we log 200 miles without intervening, then we didn’t learn anything.”
If Google has come halfway to developing the autonomous car in the last five years, it still has a long way to go. And every incremental advance becomes harder. But once the car learns to drive in cities, Google engineers say, highway driving is easier to develop. Also, because every trip starts and ends on surface streets, a self-driving car that can drive in the city is more useful than one that can only take over control on the highway.
In typical Silicon Valley fashion, Google is speeding ahead with development of the autonomous car despite speed bumps that keep others plodding along. For example, General Motors and other automakers have formed a consortium to develop vehicle-to-vehicle and vehicle-to-infrastructure Wi-Fi communications that tell cars when to stop and help them anticipate the actions of others on the road.
The Google car relies on maps. One of the development challenges, for example, was to correctly identify traffic lights. While the cameras mounted on the car have no problem telling a red from a green or yellow light, they have to know where to look. So the Google maps record exactly how high off the ground lights are mounted at every intersection, where the lane lines are, and where curbs are. (The project doesn’t use data from Google Street View cars, because it is not detailed enough.) The Google car can’t drive autonomously where it doesn’t have dedicated maps. So far, engineers haven’t tried driving on snow, and rain and fog pose some limitations. Interestingly, Google says driving at night actually works better than during the day.
To “see,” the car relies on forward radars and cameras, much like those in advanced pre-collision and lane guidance systems on cars currently on the market, such as the Jeep Cherokee and Mercedes-Benz S-Class. But it adds a roof-mounted laser range finder that scans the road ahead, behind, and to the sides 60 times a second. The car tries to follow the most conservative course of action in responding to every situation, and it updates reactions in real time. It gathers and processes 30 to 40 megabytes of data per second.
Engineers are also working on smoothing out the car’s reactions. For example, in our brief, 25-minute drive, the car stopped fairly abruptly for one yellow light, accelerated slightly through another, then immediately had to brake for the jaywalkers. In another instance, it made a fairly abrupt turn into the entrance to a left turn lane. Passengers who get car sick wouldn’t be happy here—and in this car, everybody’s a passenger.
“All vehicle design in the past has been built on the assumption that every car has a human driver,” said Larry Burns, a senior-level engineer who spent his career at General Motors. That affects everything from the shape of the car to where the engine goes. “With this car, that’s no longer true,” he said. He estimated that freeing drivers up to do more productive tasks, such as building relationships or creative projects using their cell phones, “could add $2 trillion to the economy.”
We think this technology could provide a huge benefit to road safety and to mobility, enabling elderly and disabled consumers to be independent and productive even when they can’t drive.
—Eric Evarts