Data Fusion Platform Could Boost Performance of Autonomous Cars

New architecture sends all of the unfiltered data from cameras, radar and Lidar sensors directly to a central processor that makes the decisions.

A new, first-of-its-kind computing platform promises to cut cost, reduce power consumption, speed time-to-market, and boost the performance of autonomous driving systems.

Targeted at the highest levels of vehicle autonomy, the new platform departs from past self-driving technologies by employing “raw data fusion.” In essence, it sends all of the unfiltered data from cameras, radar and Lidar sensors directly to a single central processor that makes the decisions.

“We take the raw data directly from the sensors, eliminate the edge node data processing, and then fuse all the data in real time across all the different sensors,” noted Glenn Perry, vice president and general manager of Mentor Graphics Embedded Systems Division, which developed the new technology and which will be discussing power-sensitive embedded design at the Embedded Systems Conference next month. “It’s a really tough challenge. But we knew that if we could do it, we could get rid of the latency and reduce the cost.”

Known as the DRS360, the new platform differs from conventional advanced driver assistance systems (ADAS) because it eliminates the dedicated processor at every sensor. Up to now, each ADAS sensor (camera, radar, Lidar) has employed its own microcontroller (MCU) to filter the data and relay it to a separate module, where another processor made the decisions. The result was multiple processors and delays of up to 100 msec as the filtered data bounced from one processor to another.
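To make that latency argument concrete, here is a minimal sketch in Python, not Mentor's implementation. The per-stage delays are invented purely for illustration; the point is the number of hops raw data must make, not the exact figures.

```python
# A rough latency model of the two architectures described above.
# All timing constants are assumptions for illustration only.

EDGE_FILTER_MS = 30.0   # assumed per-sensor MCU filtering time
BUS_HOP_MS = 10.0       # assumed transfer time between processors
FUSION_MS = 20.0        # assumed fusion/decision time at the central node

def conventional_adas_latency() -> float:
    """Each sensor's MCU filters first, then relays the filtered data
    across the bus to a separate module, which makes the decision."""
    return EDGE_FILTER_MS + BUS_HOP_MS + FUSION_MS + BUS_HOP_MS

def raw_fusion_latency() -> float:
    """Raw data travels straight to one central processor; the
    edge-filtering stage and the extra hop disappear."""
    return BUS_HOP_MS + FUSION_MS

if __name__ == "__main__":
    print(f"conventional ADAS: {conventional_adas_latency():.0f} ms")
    print(f"raw data fusion:   {raw_fusion_latency():.0f} ms")
```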

The DRS360 module: an FPGA, an SoC, and a microprocessor tied together by a communication framework and sensor-fusion software. (Source: Mentor Automotive)

“A couple of years ago, we looked at the existing architecture and said, ‘This isn’t going to work – it’s fundamentally flawed,’” Perry told Design News. “So we designed a new Level 5 system from a clean slate, and made it so it could scale down to the lower levels.”

The resulting solution is a centralized module consisting of an FPGA (field programmable gate array), an SoC (system on chip), and a microprocessor, along with a communication framework that allows the devices to “talk” to one another. On top of that, it employs specialized software that enables it to capture, process, and then fuse the data from all the sensors. Mentor engineers say this approach reduces latency, cuts bill-of-materials costs, and consumes less power.
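As a rough illustration of that capture-and-fuse flow, the sketch below models a centralized module that ingests raw samples from each sensor and fuses them in a single step, with no per-sensor filtering. The class and method names are hypothetical, not Mentor's API.

```python
# A minimal sketch, assuming a simple ingest-then-fuse pattern, of a
# centralized module handling raw streams from multiple sensors.

class FusionModule:
    """Collects raw samples per sensor and fuses the latest set."""

    def __init__(self, sensors):
        self.sensors = set(sensors)
        self.latest = {}            # sensor name -> most recent raw sample

    def capture(self, sensor, raw):
        """Ingest raw, unfiltered data directly from a sensor."""
        self.latest[sensor] = raw

    def fuse(self):
        """Fuse once every sensor has reported at least one sample."""
        if self.sensors.issubset(self.latest):
            # Placeholder fusion: a combined snapshot of all raw streams,
            # handed to the decision logic in one step.
            return dict(self.latest)
        return None

module = FusionModule(["camera", "radar", "lidar"])
module.capture("camera", b"frame")
module.capture("radar", b"returns")
print(module.fuse())            # None: lidar has not yet reported
module.capture("lidar", b"point cloud")
print(module.fuse())            # fused snapshot of all three streams
```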

Surprisingly, Mentor claims that the new architecture not only eliminates the need for data processing at every sensor, but also reduces the amount of processing that takes place at the central processor. “We had a hunch that if we used a centralized raw data fusion set, we could apply machine learning algorithms that would actually reduce the need for compute power at the central node,” Perry said. “And that’s the way it has turned out.”
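The sketch below loosely illustrates that idea under invented numbers: a single learned model scores the fused raw features in one pass, instead of running a separate processing chain at each sensor. The features and weights here are illustrative assumptions, not anything Mentor has published.

```python
# One shared learned model over fused raw features, standing in for
# three separate per-sensor pipelines. All values are invented.

# Fused feature vector: raw-derived values from all three sensors at once,
# e.g. camera edge density, radar range rate, lidar point density (assumed).
fused_features = [0.8, 0.1, 0.6]

# Hypothetical trained weights for a simple linear scorer.
weights = [0.5, -0.2, 0.7]
bias = -0.3

# Single pass, single model: the central node does one dot product
# instead of three independent filtering-and-detection chains.
score = sum(w * x for w, x in zip(weights, fused_features)) + bias
print("obstacle score:", round(score, 3))
```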

Mentor says the new technology can handle the computing chores for the full range of driving autonomy, scaling from today’s driver-assistance features up to fully autonomous Level 5 vehicles.
