As the autonomous car evolves, automakers face a complex question: How can self-driving cars process massive amounts of data and then come to logical, safe conclusions about it?
Today, most automakers accomplish that with a distributed form of data processing. That is, they place intelligence at the sensors. More recently, though, that’s begun to change. Many engineers now favor a more centralized form of data processing, in which simple sensors send raw, unprocessed data to a powerful central processor, which does all the “thinking.”
To learn more about distributed and centralized architectures, Design News talked with Davide Santo, an engineering veteran of Motorola and Freescale Semiconductor, and now the director of the Autonomous Driving Lab for NXP Semiconductors. Here, Santo offers his views on the topic.
Davide Santo of NXP: “It’s clear to me that there needs to be a centralized function for the planning phase – planning means path-finding, maneuvering and motion trajectory.” (Source: NXP Semiconductors)
DN: Let’s start with definitions. Could you define distributed and centralized autonomous vehicle architectures for us?
SANTO: It dates back to a definition proposed by the US Department of Defense Laboratories in 1999. Essentially, that definition was limited to sensor fusion. Distributed meant that every sensor node knew what every other node was doing. And centralized meant that there was only one central point that collected all the information and created the sensor fusion map.
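Santo’s distinction can be sketched in a few lines of code. The sketch below is purely illustrative (the function names and sensor labels are invented for this example, not taken from any NXP design): in the centralized case, one point collects every raw reading and builds the fusion map; in the distributed case, every node must end up knowing what every other node measured, so each node carries its own copy of the map.

```python
# Hypothetical illustration of the two fusion topologies Santo describes.
# Only the *location* of the fusion map differs; the data is the same.

def centralized_fusion(readings):
    """One central point collects all sensor readings into a single map."""
    fusion_map = {}
    for sensor_id, value in readings.items():
        fusion_map[sensor_id] = value  # raw data, untouched at the edge
    return fusion_map

def distributed_fusion(readings):
    """Every node knows what every other node is doing, so each node
    holds its own full copy of the fused picture."""
    per_node_maps = {}
    for node in readings:
        per_node_maps[node] = dict(readings)  # one message from every peer
    return per_node_maps

readings = {"radar_front": 42.0, "camera_front": 17.5, "lidar_roof": 3.3}
central = centralized_fusion(readings)      # one map, in one place
distributed = distributed_fusion(readings)  # one map per node
```

The point of the contrast: centralized fusion builds the map once, while distributed fusion must replicate and synchronize that same map at every node.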
DN: There’s also a solution that’s a hybrid of those two electrical architectures. How does that work?
SANTO: The hybrid concept is a middle solution. There’s a central unit that works at a higher abstraction level. And there are domains. The domains can work geographically, for example, in the front and back of the car. Or they can be based on cameras and sensors.
DN: What’s been the primary solution to date?
SANTO: Up to now, systems have been distributed because there was no real centralized solution on the market. But today, because of the computing capabilities of companies like Nvidia, it’s entirely possible to do a centralized architecture.
DN: What are the advantages and disadvantages of using a distributed architecture?
SANTO: The advantage might be that you don’t have to bring in a huge amount of data. You don’t have the problem of carrying data in a secure and efficient way from the edge to the center. And you can effectively put things together in the most cost efficient way.
The negative aspect is that you have to distribute the information simultaneously and synchronize it across all the nodes. And this becomes practically impossible when you exceed three or four nodes.
DN: What are the advantages and disadvantages of a centralized architecture?
SANTO: You get the best possible information. If you don’t touch the data, don’t modify it, don’t filter it at the edge, then you get the maximum possible information.
The disadvantage is that your center becomes a monster. It’s huge. You have to move data from as many as 12 cameras with four megapixels each, so you’re moving gigabytes. And