New System Allows Self-Driving Cars to Make More Human Decisions

An enhanced end-to-end navigation system designed by researchers at MIT can help driverless cars navigate roads they’ve never seen before with reasoning similar to how humans drive.

A common worry about self-driving car technology is that it won't be able to navigate safely or accurately when operating autonomously. To help address this problem, researchers have now created a new system that brings human reasoning into the self-driving equation.

Researchers at MIT have developed new technology to bring human reasoning into the decision-making of self-driving cars to increase accuracy and safety. (Image source: Chelsea Turner)

A team of scientists at MIT has created a system that uses simple maps and visual data to enable driverless cars to navigate routes in new, complex environments. The technology uses machine learning to let an autonomous control system "learn" the steering patterns of human drivers on roads and later imitate those drivers when it has to navigate without human intervention, researchers said.

“Our objective is to achieve autonomous navigation that is robust for driving in new environments,” Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and a professor of electrical engineering and computer science, said in a press statement.

She cited an example in which an autonomous vehicle is trained to drive in an urban setting, such as the streets of Cambridge, Mass., where MIT is located. "The system should also be able to drive smoothly in the woods, even if that is an environment it has never seen before," Rus said.

The Not-Human Condition

The core limitation of current driverless car technology is simply that it isn't human: it can't rely on the reasoning a human driver uses when navigating with GPS to determine location and decide where to go next on the way to a destination.

In a driverless car, this task requires the car to map and analyze new roads, which is time consuming. The maps these systems use, typically generated by 3D scans, are also computationally intensive to process. This means they can't be handled on the fly, limiting navigation.

To overcome such limitations and help the automated system make more human-like decisions, the MIT team improved upon an end-to-end navigation system the team had already developed, training it to drive toward a destination in a previously unseen environment, researchers said.

To achieve this, they enabled the system to predict a full probability distribution over all possible steering commands at any given instant while driving, said Alexander Amini, an MIT graduate student who worked on the research.

“With our system, you don’t need to train on every road beforehand,” he said in a press statement. “You can download a new map for the car to navigate through roads it has never seen before.”
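To make the idea of a "full probability distribution over steering commands" concrete, here is a minimal NumPy sketch assuming a Gaussian-mixture parameterization, where each mixture component represents one plausible maneuver. The function name, the component values, and the mixture form are illustrative assumptions, not the team's actual model:

```python
import numpy as np

def steering_distribution(weights, means, sigmas, angles):
    """Evaluate a Gaussian-mixture density over candidate steering angles.

    weights, means, sigmas describe K mixture components (hypothetical
    values; the researchers' actual parameterization may differ).
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize mixture weights
    density = np.zeros_like(angles, dtype=float)
    for w, m, s in zip(weights, means, sigmas):
        density += w * np.exp(-0.5 * ((angles - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return density

# A fork in the road might produce two modes: bear left or bear right.
angles = np.linspace(-0.5, 0.5, 1001)            # candidate steering angles (radians)
pdf = steering_distribution([0.5, 0.5], [-0.2, 0.2], [0.05, 0.05], angles)
best = angles[np.argmax(pdf)]                    # most likely steering command
```

A multimodal output like this is what lets the car represent genuine ambiguity (two equally valid turns at a fork) instead of averaging them into a single, possibly unsafe, command.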

Learning for Driving Accuracy

The system works by leveraging a machine-learning model called a convolutional neural network (CNN). CNNs are already commonly used for image recognition, a capability that carries over to this new application, researchers said.

During training, the system watches and learns how to steer from a human driver, correlating rotations of the steering wheel to road curvatures it observes through cameras and an inputted map, researchers said. This allows it to eventually learn the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries, they said.
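The learning step described above, correlating what the cameras see with how the wheel is turned, rests on the CNN's basic building block: a convolution that extracts visual features, followed by a readout that maps them to a steering value. The sketch below, in plain NumPy with hypothetical weights, shows that pipeline at toy scale; a real system would learn many filters and layers from data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Standard nonlinearity between CNN layers."""
    return np.maximum(x, 0.0)

def predict_steering(image, kernel, readout_weight, bias):
    """Tiny CNN forward pass: conv -> ReLU -> global average pool -> linear.

    Training would adjust `kernel`, `readout_weight`, and `bias` so the
    output matches the human driver's recorded steering for this image.
    """
    features = relu(conv2d(image, kernel))
    pooled = features.mean()                 # global average pooling
    return pooled * readout_weight + bias    # scalar steering estimate
```

During training, the error between this prediction and the human's actual steering angle would be backpropagated to update the weights, which is how the network comes to associate road curvature in the image with the correct command.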

Like human drivers, the system can also detect discrepancies between its map and the features of the road, allowing it to determine whether its position, sensors, or mapping are incorrect and to make adjustments if necessary, researchers said.

The team presented a paper on its work at the International Conference on Robotics and Automation in May.

Researchers hope their system will be used to make driverless car technology safer, more accurate, and less prone to failures that could create problems, they said.

“In the real world, sensors do fail,” Amini said in the press statement. “We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road.”

Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco and New York City. In her free time she enjoys surfing, traveling, music, yoga and cooking. She currently resides in a village on the southwest coast of Portugal.
