(Image source: Nvidia)
Nvidia has unveiled the Jetson AGX Xavier, the latest in its family of GPU-based computing platforms. In the past, the company's Jetson family has focused on general machine learning and artificial intelligence processing, or specifically on automotive applications. This latest platform is focused on providing edge-based AI for robotics, allowing robots to handle AI processing locally rather than relying on a cloud-based system or remote server.
The company is touting Xavier as the world's first computer designed specifically for robotics and edge computing. Xavier is capable of handling a number of complex functions including sensor fusion, visual odometry, localization and mapping, obstacle detection, and path planning—all critical functions for collaborative robots as well as machines for package delivery, industrial inspection, and consumer-facing tasks, such as retail assistance.
At the heart of the 100 x 87-mm Xavier module is a 512-core Nvidia Volta GPU, which includes 64 Tensor Cores—programmable units optimized for the training and inference computations at the heart of deep learning. The GPU is accompanied by an 8-core Nvidia Carmel ARMv8.2 64-bit CPU, 16GB of 256-bit LPDDR4x memory, and dedicated deep learning and vision accelerators. At peak performance, the Xavier can deliver 32 trillion operations per second (TOPS) with 750Gbps of high-speed I/O, while consuming as little as 10W. (It can also be configured for 15W and 30W operation, depending on the application.)
As expected, given its range of intended robotics applications, the platform comes with several flavors of I/O—accommodating cameras (up to six active sensor streams), HD display, Ethernet, USB, PCIe, and CAN, along with miscellaneous interfaces including UART, SPI, I2C, I2S, and GPIO.
Until 5G rolls out worldwide, the level of connectivity needed to support many higher-order autonomous machines (self-driving cars included) simply isn't feasible with current remote and cloud-based systems. As a result, many companies have been looking toward edge-based solutions that place all the processing power needed for sophisticated deep learning applications directly into the machines themselves.
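The latency tradeoff behind that shift can be sketched in a toy simulation. This is purely illustrative—the 100-ms network round trip and the `fake_inference` workload are hypothetical stand-ins, not measured figures for any real robot or cloud service:

```python
import time

def fake_inference():
    """Stand-in for a deep learning inference step (e.g., obstacle detection)."""
    return sum(i * i for i in range(10_000))  # trivial compute load

def cloud_inference(network_round_trip_s=0.1):
    """Hypothetical cloud path: same compute, plus a simulated network round trip."""
    time.sleep(network_round_trip_s)  # stand-in for uplink + downlink latency
    return fake_inference()

def edge_inference():
    """Hypothetical edge path: the compute happens on the device itself."""
    return fake_inference()

def measure(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

cloud_latency = measure(cloud_inference)
edge_latency = measure(edge_inference)
print(f"edge: {edge_latency * 1000:.1f} ms, cloud: {cloud_latency * 1000:.1f} ms")
```

However small the actual compute step is, the cloud path can never beat the network round trip—which is why latency-sensitive functions like obstacle detection and path planning push vendors toward on-device processing.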
If cloud-based services like Alexa and Siri represent more of a hivemind, with one service handling AI processing for all the various nodes (robots, cars, machines, etc.), think of edge-based AI as the opposite: a system of individuals capable of coming up with their own solutions and conclusions.
The Xavier platform comes in the wake of other tech companies announcing their own solutions in the edge AI space. Earlier this year, NXP Semiconductors rolled out a software-based edge solution designed to help engineers understand what machine learning capabilities any specific piece of hardware (even a low-end chipset) can deliver. Rather than industrial and robotics applications, however, NXP's solution appears aimed more at smart home devices, such as doorbells, that could benefit from a bit of machine learning to add functionality.
Not long after NXP's announcement, Google unveiled its Edge TPU, a purpose-built chip for edge AI computing designed around Google's proprietary Tensor Processing Unit (TPU) architecture. Google has yet to release official specs for the chip, however, and has not announced any developer partnerships around it at this point.
The Nvidia Jetson AGX Xavier is now available. The platform supports a full AI software stack based on Nvidia JetPack, which includes a board support package (BSP), the Ubuntu Linux operating system, the Nvidia CUDA, cuDNN, and TensorRT software libraries for GPU-based deep learning, and Nvidia's DeepStream SDK for video analytics to provide real-time situational awareness.
Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, and robotics.