However, Taub says tomorrow's vehicles won't need much additional computing power to make it all happen. Given the prevalence of today's complex safety systems, much of that capability is already in place, he said.
"Each sensor will have its own smarts," he said. "And then all the information from the sensors will be sent to a central processor that will do the integration and fuse it into a single level of situational awareness. But you won't need supercomputers. It's a distributed network, and we think it's doable."
In the beginning, autonomous cars will be "sensor-intensive." They'll employ radar, LIDAR (light detection and ranging), ultrasound, and camera-based sensors. Such subsystems, working with central processors and highly developed software algorithms, will endow the vehicles with the full, 360° situational awareness that vehicle developers seek.
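The distributed architecture Taub describes -- each sensor doing its own local processing, with a central processor fusing the results into a single picture -- can be sketched in a few lines. This is a hypothetical illustration, not any automaker's actual software: the sensor names, confidence weights, and the simple weighted-average fusion rule are all assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One sensor's local estimate of an obstacle's range, in meters.

    In Taub's phrasing, each sensor "has its own smarts": it reduces raw
    data to a compact estimate before sending it to the central processor.
    """
    sensor: str
    range_m: float
    confidence: float  # 0..1; higher means the sensor trusts its reading more

def fuse(detections):
    """Central processor: merge per-sensor estimates into one fused range.

    A confidence-weighted average stands in here for real sensor-fusion
    algorithms (e.g., Kalman filtering), which are far more sophisticated.
    """
    total_weight = sum(d.confidence for d in detections)
    return sum(d.range_m * d.confidence for d in detections) / total_weight

# Hypothetical readings from the sensor types named above.
readings = [
    Detection("radar", 42.0, 0.90),
    Detection("lidar", 41.5, 0.95),
    Detection("camera", 43.0, 0.60),
]
print(f"fused range: {fuse(readings):.2f} m")  # prints "fused range: 42.05 m"
```

The point of the sketch is the shape of the computation, not its content: no single node needs supercomputer-class power, because each sensor's "smarts" run locally and the central processor only integrates compact estimates.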
Eventually, some of the sensors will be augmented or even replaced by on-board vehicle-to-vehicle and vehicle-to-infrastructure communication systems. Those systems will enable the vehicles to communicate silently with one another, as well as with stop lights, road signs, and virtually everything else that matters. As a result, the vehicles will get the situational awareness they need without the high cost of lasers.
To a small degree, vehicle autonomy is already taking shape around us. The now-famous Google automated cars have logged more than 140,000 miles, including drives on such well-known routes as Hollywood Boulevard, Lombard Street in San Francisco, the Golden Gate Bridge, and the Pacific Coast Highway.
Taub says much of the technology is already in place, and production vehicle manufacturers are using some of it today. Adaptive cruise control and lane-keeping technologies are appearing on new vehicles. And accident avoidance -- the ability to commandeer the brakes and steering wheel -- is coming very soon. Those features, he said, lay the groundwork for complete autonomy.