A new chip interconnect technology offers promise for autonomous car manufacturers who want to employ machine learning in their vehicles.
Ncore 3 Cache Coherent Interconnect IP enables chip designers to create system-on-chip (SoC) devices that mix central processors (CPUs), graphics processors (GPUs), digital signal processors (DSPs), and hardware accelerators, all of which are being increasingly employed in self-driving vehicles. “This allows you to build the supercomputer-on-a-chip technology that’s needed to implement real-time machine learning,” said Kurt Shuler, vice president of marketing for ArterisIP, an intellectual property (IP) vendor and developer of the Ncore 3 interconnect IP. ArterisIP will have experts on hand at the upcoming ARM TechCon 2017 to discuss the technology.
Ncore 3 Cache Coherent Interconnect IP allows simultaneous use of ARM's AMBA ACE and CHI protocols. It also integrates with other chips through a CCIX controller to create multi-die coherent systems. (Source: ArterisIP)
The new interconnect technology, introduced at the recent 2017 Linley Processor Conference, is especially important now, given the growing trend toward the use of machine learning in vehicles. To endow a vehicle’s computers with the ability to learn, engineers increasingly need more powerful processors, and they need those processors out near the sensors, as well as at the vehicle’s central computer.
“In each one of those processing nodes, whether it’s out by the camera or in by the executive (computing) function, you need chips with multiple processing elements – essentially supercomputers-on-a-chip,” Shuler said. Often, those chips may incorporate up to a half-dozen different types of programmable hardware, Shuler added.
Indeed, some high-end chips may now have five to ten CPUs and as many as 20 hardware accelerators, Shuler said. The chip’s interconnects tie those disparate elements together.
The key to unifying the performance of those elements is the use of so-called “cache coherence,” Shuler said. By bringing cache coherence to an interconnect, Ncore 3 provides a simpler way for the various elements within that supercomputer-on-a-chip to share data. “If you don’t have cache coherence, it becomes more difficult to program,” Shuler said. “Cache coherence gives you one common view of the memory for the whole system.”
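To make the "one common view of memory" idea concrete, here is a minimal Python sketch of an invalidation-based (MESI-style) coherence scheme, the general class of protocol such interconnects implement. This is purely illustrative: the class names are hypothetical, the model is simplified to three states, and real IP like Ncore 3 does this in hardware, not software.

```python
# Toy model of an invalidation-based coherence protocol (MESI-style).
# When one cache writes a line, every other cache's copy is invalidated,
# so all processing elements always observe a single view of memory.

MODIFIED, SHARED, INVALID = "M", "S", "I"

class Memory:
    def __init__(self):
        self.data = {}            # addr -> value (backing store)

class Cache:
    def __init__(self, bus):
        self.lines = {}           # addr -> (state, value)
        self.bus = bus
        bus.attach(self)

    def read(self, addr):
        state, _ = self.lines.get(addr, (INVALID, None))
        if state == INVALID:      # miss: fetch the current value over the bus
            value = self.bus.fetch(addr, requester=self)
            self.lines[addr] = (SHARED, value)
        return self.lines[addr][1]

    def write(self, addr, value):
        self.bus.invalidate(addr, requester=self)  # other copies go stale
        self.lines[addr] = (MODIFIED, value)

class Bus:
    """Snooping interconnect: keeps every attached cache consistent."""
    def __init__(self, memory):
        self.memory = memory
        self.caches = []

    def attach(self, cache):
        self.caches.append(cache)

    def fetch(self, addr, requester):
        # If another cache holds a dirty copy, supply it and write it back.
        for c in self.caches:
            if c is not requester:
                state, value = c.lines.get(addr, (INVALID, None))
                if state == MODIFIED:
                    self.memory.data[addr] = value
                    c.lines[addr] = (SHARED, value)
                    return value
        return self.memory.data.get(addr, 0)

    def invalidate(self, addr, requester):
        for c in self.caches:
            if c is not requester and addr in c.lines:
                state, value = c.lines[addr]
                if state == MODIFIED:          # write back before discarding
                    self.memory.data[addr] = value
                c.lines[addr] = (INVALID, None)

# Two "processing elements" sharing one coherent view of address 0x100:
bus = Bus(Memory())
cpu, accel = Cache(bus), Cache(bus)
cpu.write(0x100, 42)
print(accel.read(0x100))   # → 42: the accelerator sees the CPU's write
```

Without the invalidate-and-fetch step, the accelerator could go on reading a stale copy of address 0x100 after the CPU's write, and software would have to manage flushes explicitly; that is the programming burden Shuler describes.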
At the same time, Shuler said, Ncore 3 enables chip designers to mix two different communication bus protocols. ARM Ltd.'s AMBA ACE, a legacy cache coherent protocol, can be used alongside CHI, a newer coherent protocol from ARM. “Our customers want to be able to use them together, and up to now it’s been difficult to do that,” Shuler said. “Ncore makes it easy for them to use both.”
Ncore 3 also integrates a controller based on the CCIX standard, which permits cache coherent connection of multiple dies with differing instruction sets, be they FPGAs, GPUs, ASICs, or other elements.
The emphasis on cache coherence will be critical going forward, not only in machine learning for autonomous cars, but also for network processing and 5G wireless technology, experts say. All of those applications will need the computing power provided by a mixture of different processing technologies. “If you’re doing 20 processing elements on a chip, and you want to be able to write software for it, you can’t do it without cache coherence,” Shuler told us.
Senior technical editor Chuck Murray has been writing about technology for 33 years. He joined Design News in 1987, and has covered electronics, automation, fluid power, and automotive.