The Battle of AI Processors Begins in 2018

2018 will mark the start of what could be a long-running battle between chipmakers to determine who builds the hardware that artificial intelligence lives on.

In March of this year, Intel made another high-profile AI acquisition in Mobileye, a developer of machine learning-based advanced driver assistance systems (ADAS), to the tune of about $15 billion. The significance of the purchase was apparent almost immediately: the chipmaker wanted to stake its claim in the autonomous vehicle space and, in doing so, perhaps also establish itself as a key provider of machine learning-focused hardware.

In November, at the Automobility LA trade show and conference in Los Angeles, Intel CEO Brian Krzanich called autonomous driving the biggest game changer of today as the company announced that its acquisition of Mobileye had yielded a new SoC, the EyeQ5, which Intel said boasts twice the deep learning performance efficiency of its closest competitor, Nvidia's Xavier deep learning platform.

Tera Operations Per Second (TOPS) is a common performance metric for high-performance SoCs. TOPS per watt extends that measurement to describe performance efficiency: the higher the TOPS per watt, the more efficient the chip. Deep Learning TOPS (DL TOPS) narrows the metric to deep learning-related operations. According to Intel's simulation-based testing, the EyeQ5 is expected to deliver 2.4 DL TOPS per watt, more than double the efficiency of Nvidia's Xavier, which delivers about 1 DL TOPS per watt.
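The arithmetic behind the metric is simple division. The sketch below uses the 24 TOPS (EyeQ5) and 30 TOPS (Xavier) figures quoted later in this article; the wattages are not official specs but hypothetical values back-calculated from the quoted efficiency numbers, shown here purely to illustrate how the metric works.

```python
# Back-of-the-envelope TOPS-per-watt arithmetic. The TOPS figures come from
# this article; the wattages are assumptions back-calculated from the quoted
# efficiency claims, included only to make the math concrete.

def tops_per_watt(tops: float, watts: float) -> float:
    """Performance efficiency: tera-operations per second per watt consumed."""
    return tops / watts

# EyeQ5 (Intel's simulation-based estimate): 24 TOPS at an implied ~10 W
print(f"EyeQ5:  {tops_per_watt(24, 10):.1f} DL TOPS/W")   # -> 2.4

# Xavier (Nvidia): 30 TOPS at an implied ~30 W
print(f"Xavier: {tops_per_watt(30, 30):.1f} DL TOPS/W")   # -> 1.0
```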

Speaking with Design News, Doug Davis, senior vice president and general manager of Intel's Automated Driving Group (ADG), said that Intel chose DL TOPS per watt because it wanted to emphasize processor efficiency over other metrics. “Focusing on DL TOPS per watt is really a good indicator of power consumption, but also, if you're thinking about it, it's also weight, cost, and cooling, so we really felt like efficiency was the important thing to focus on,” Davis said. “Think of electric vehicles [EVs], for example. With EVs it's all about range, but if my autonomous computing platform consumes too much power it reduces my range.”

Davis added, “There's always a lot of conversation around absolute performance, but when we looked at it we wanted to come at it from a more practical standpoint as we thought about different types of workloads. Deep learning is really key in being able to recognize objects and make decisions, and do that as quickly and efficiently as possible.”

Nvidia, however, has disputed Intel's numbers, particularly given that the EyeQ5's figures are based on simulations and the SoC won't be available for two years. In a statement to Design News, Danny Shapiro, senior director of automotive at Nvidia, said: “We can't comment on a product that doesn't exist and won't until 2020. What we know today is that Xavier, which we announced last year and will be available starting in early 2018, delivers higher performance at 30 TOPS compared to EyeQ5's purely simulated prediction of 24 TOPS two years from now.”

Are GPUs Destined for AI?

Which brings us to GPUs. Call it a happy accident or serendipity, but GPU makers have found themselves holding the technology that could be at the forefront of the AI revolution. Once thought of as a complement to CPUs (many CPUs have GPUs integrated into them to handle graphics processing), GPUs have expanded outside of their graphics- and video-centric niche and into the domain of deep learning, where GPU manufacturers say they offer far superior performance over CPUs.

Nvidia says its Titan V GPU is the most powerful PC GPU ever developed for deep learning. (Image source: Nvidia)

While there are a handful of companies in the GPU marketplace, no company is more synonymous with the technology than Nvidia. According to a report by Jon Peddie Research, Nvidia beat out both major competitors, AMD and Intel, with an overall 29.53% increase in GPU shipments in the third quarter of 2017; AMD's shipments increased by 7.63%, while Intel's increased by 5.01%. Naturally, this growth is driven mainly by the video gaming market, but analysts at Jon Peddie Research believe demand for high-end performance in applications such as cryptocurrency mining also contributed to the shipments.

Demand for processors that can handle specific high-performance tasks, things like cryptocurrency mining and AI applications, is exactly why GPUs are finding themselves at the forefront of the AI hardware conversation. GPUs contain hundreds of cores that can run thousands of software threads simultaneously, all while being more power efficient than CPUs for this kind of work. Whereas CPUs are generalized and tend to jump around, performing a number of different tasks, GPUs excel at performing the same operation over and over again on huge batches of data. This key difference is what gives GPUs their name and makes them so adept at handling graphics, since graphics processing involves thousands of tiny calculations all happening at once. That same ability also makes GPUs ideal for tasks like the aforementioned neural network training, as the sketch below illustrates.
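Here is a minimal sketch of that data-parallel pattern. It uses NumPy on the CPU as a stand-in; on an actual GPU, the same whole-batch expression (written with a library such as CuPy or a CUDA kernel) would fan out across thousands of cores. The values and names are illustrative only.

```python
import time
import numpy as np

# One million input values; think of each element as work for one GPU thread.
x = np.random.rand(1_000_000).astype(np.float32)

# CPU-style: a general-purpose loop visiting one element at a time.
start = time.perf_counter()
out_loop = [3.0 * v + 1.0 for v in x]
loop_time = time.perf_counter() - start

# GPU-style: the identical operation applied to the whole batch at once.
# (NumPy's vectorization stands in here for true GPU parallelism.)
start = time.perf_counter()
out_vec = 3.0 * x + 1.0
vec_time = time.perf_counter() - start

print(f"element-by-element loop: {loop_time:.4f}s")
print(f"whole-batch operation:   {vec_time:.4f}s")
```

Even on a CPU, the batch form is dramatically faster, and the gap widens on hardware built to apply one instruction across many data elements at once.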

Just this December, Nvidia announced the Titan V, a PC-based GPU designed for deep learning. The new GPU is based on Nvidia's Volta architecture, which introduces a new type of core technology that Nvidia calls Tensor Cores. In mathematical terms, a tensor is “a mathematical object analogous to but more general than a vector, represented by an array of components that are functions of the coordinates of a space.” What Nvidia has done is develop cores with an architecture purpose-built for handling the demands of deep learning and neural network computing.
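The primitive those cores accelerate is a small fused matrix multiply-accumulate, D = A × B + C; per Nvidia's Volta materials, each Tensor Core operates on 4×4 tiles with half-precision (FP16) inputs accumulated at single precision (FP32). The sketch below reproduces that operation in NumPy purely to show the math; it is not how Tensor Cores are actually programmed.

```python
import numpy as np

# The matrix multiply-accumulate primitive D = A @ B + C that a Volta
# Tensor Core executes on 4x4 tiles: FP16 inputs, FP32 accumulation.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

# Mixed precision: promote the FP16 inputs to FP32 for the accumulate step.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)  # one 4x4 tile of a much larger neural-network matrix product
```

Neural network training and inference reduce almost entirely to large matrix products built from tiles like this, which is why hardware dedicated to this one operation pays off so handsomely for deep learning.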
