As the autonomous car evolves, automakers face a complex question: how to enable self-driving cars to process massive amounts of data and reach logical, safe conclusions from it.
To date, most automakers have accomplished that with a distributed form of data processing. That is, they place intelligence at the sensors. More recently, though, that's begun to change. Many engineers now favor a more centralized form of data processing, in which simple sensors send raw, unprocessed data to a powerful central processor, which does all the "thinking."
To learn more about distributed and centralized architectures, Design News talked with Davide Santo, an engineering veteran of Motorola and Freescale Semiconductor, and now the director of the Autonomous Driving Lab for NXP Semiconductors. Here, Santo offers his views on the topic.
Davide Santo of NXP: “It’s clear to me that there needs to be a centralized function for the planning phase – planning means path-finding, maneuvering and motion trajectory.” (Source: NXP Semiconductors)
DN: Let’s start with definitions. Could you define distributed and centralized autonomous vehicle architectures for us?
SANTO: It dates back to a definition proposed by the US Department of Defense Laboratories in 1999. Essentially, that definition was limited to sensor fusion. Distributed meant that every sensor node knew what every other node was doing. And centralized meant that there was only one central point that collected all the information and created the sensor fusion map.
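The two topologies Santo defines can be contrasted in a toy sketch. This is purely illustrative; the class and function names are invented for the example and don't reflect any vendor's implementation:

```python
from dataclasses import dataclass

# Toy contrast of the two fusion topologies described above.
# All names and data shapes are illustrative, not any real API.

@dataclass
class Detection:
    sensor: str
    obj_id: int
    position: tuple  # (x, y) in meters, vehicle frame

def centralized_fusion(raw_feeds: dict) -> list:
    """Centralized: one central point collects every raw detection
    and builds the single fused sensor map."""
    fused = []
    for sensor, detections in raw_feeds.items():
        fused.extend(detections)  # a real system would associate and track here
    return fused

class DistributedNode:
    """Distributed: each node keeps its own detections plus a copy
    of what every other node is seeing."""
    def __init__(self, name: str):
        self.name = name
        self.local = []       # this node's own detections
        self.peer_view = {}   # peer name -> that peer's detections

    def broadcast(self, peers: list):
        # Every node must tell every other node what it sees --
        # the all-to-all exchange that gets hard past a few nodes.
        for p in peers:
            if p is not self:
                p.peer_view[self.name] = self.local
```

The `broadcast` step is where the scaling pain Santo mentions later shows up: with n nodes, it is an n-squared exchange that must also stay synchronized in time.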
DN: There’s also a solution that’s a hybrid of those two electrical architectures. How does that work?
SANTO: The hybrid concept is a middle solution. There’s a central unit that works at a higher abstraction level. And there are domains. The domains can work geographically, for example, in the front and back of the car. Or they can be based on cameras and sensors.
DN: What’s been the primary solution to date?
SANTO: Up to now, systems have been distributed because there was no real centralized solution on the market. But today, because of the computing capabilities of companies like Nvidia, it's entirely possible to do a centralized architecture.
DN: What are the advantages and disadvantages of using a distributed architecture?
SANTO: The advantage might be that you don't have to bring in a huge amount of data. You don't have the problem of carrying data in a secure and efficient way from the edge to the center. And you can effectively put things together in the most cost-efficient way.
The negative aspect is that you have to distribute the information simultaneously and synchronize it across all the nodes. And this becomes practically impossible once you exceed three or four nodes.
DN: What are the advantages and disadvantages of a centralized architecture?
SANTO: You get the best possible information. If you don’t touch the data, don’t modify it, don’t filter it at the edge, then you get the maximum possible information.
The disadvantage is that your center becomes a monster. It's huge. You have to move data from as many as 12 cameras with four megapixels each, so you're moving gigabytes. And you have to move radar data, so you're moving gigabytes again. You end up having this huge amount of data that comes in at a high frequency rate, and it has to be processed. Your machine at the center is non-scalable, and when you can't scale, you can't offer the long-term capabilities that automotive will need.
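The scale Santo describes is easy to check with back-of-envelope arithmetic. The camera count and resolution come from the interview; the frame rate and bit depth below are assumed values for illustration:

```python
# Rough data-rate estimate for the camera load described above.
cameras = 12
pixels_per_camera = 4_000_000   # four megapixels, per the interview
bits_per_pixel = 12             # assumed raw sensor bit depth
frames_per_second = 30          # assumed frame rate

bits_per_second = cameras * pixels_per_camera * bits_per_pixel * frames_per_second
gigabytes_per_second = bits_per_second / 8 / 1e9
print(f"{gigabytes_per_second:.2f} GB/s of raw camera data")  # prints "2.16 GB/s of raw camera data"
```

Even before radar and lidar are added, the central processor is ingesting on the order of two gigabytes of raw pixel data every second.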
DN: As we move closer to actual vehicle autonomy, is one or the other starting to emerge as a leader?
SANTO: It’s clear to me that there needs to be a centralized function for the planning phase – planning means path-finding, maneuvering and motion trajectory. It’s not the end-to-end (centralized architecture) that Nvidia wants to have. We’re still going to have to have intelligent sensors that can reduce the bandwidth and optimize the cost somewhere between the edge and the center.
DN: So you’re suggesting that the hybrid architecture is the future? Does NXP see this as the solution?
SANTO: In the future, we believe hybrid will be the path because there is always the need to process close to the sensor, whether it’s for cameras, or antennas for radar, or cloud point analysis. At the same time, there will always be a need for a centralized place where all the local maps will be brought together to complete the centralized model.
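The hybrid flow Santo outlines, local processing near the sensor plus a central unit that merges the local maps, can be sketched minimally. Everything here (thresholds, object names, functions) is invented for illustration:

```python
# Minimal sketch of a hybrid architecture: each sensor node reduces its
# raw stream to a compact local map, and a central unit merges those
# maps into one environment model. All names are illustrative.

def edge_process(sensor_name, raw_samples):
    """Edge step: reduce raw data to a short object list.
    Here we simply keep detections above a confidence threshold,
    so only a few bytes per object travel to the center."""
    return [(sensor_name, obj) for obj, conf in raw_samples if conf > 0.5]

def central_merge(local_maps):
    """Central step: combine the local maps, recording which
    sensors confirmed each object."""
    merged = {}
    for local in local_maps:
        for sensor, obj in local:
            merged.setdefault(obj, []).append(sensor)
    return merged

front_camera = edge_process("front_camera", [("car_1", 0.9), ("sign_3", 0.2)])
front_radar  = edge_process("front_radar",  [("car_1", 0.8), ("ped_7", 0.7)])
model = central_merge([front_camera, front_radar])
# car_1 is confirmed by two sensors; sign_3 never leaves the edge node.
```

The point of the sketch is the bandwidth trade: the edge filtering cuts what crosses the vehicle network, while the central merge still produces the single model that planning needs.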
DN: What does that mean for the future of automotive sensors?
SANTO: The sensor will become a little less intelligent, but it will not be a stupid sensor. It will definitely keep on doing important operations.
It’s very naïve to think we can do everything centralized. There’s so much you can do to make the sensor better and more useful for Level 3, Level 4 and Level 5 [autonomous] vehicles.
DN: Wouldn’t it be in the automaker’s best interest to go with a distributed system? That way, a lot of the development work could be offloaded to the suppliers.
SANTO: That’s exactly right. The question is, does the OEM want that? How does the OEM control a completely distributed system? They don’t. It puts them totally in the hands of the Tier One, with no chance of controlling it themselves.
The problem is it’s very difficult to control a distributed system. In order to make it work, you need to agree on languages, formats, protocols, and networking. It’s super tough. If the OEM could force their suppliers to do all that, they’d have a good life. But I doubt they can force all of the Tier Ones to do the same type of modeling, the same type of mapping, the same type of algorithms. The Tier Ones need to compete, and to do that, they have to offer differences.
DN: As we approach Level 5, will a standard be necessary?
SANTO: I hope for it. It happened in avionics. But for the automotive market, it’s going to be tougher. A little bit of agreement is needed, but it’s probably not feasible today.