Microsoft Leverages FPGAs for Real-Time AI
Microsoft's Project Brainwave is a deep learning platform that uses the flexibility of field programmable gate arrays (FPGAs) to deliver high-performance artificial intelligence.
September 13, 2017
No technology is immune to the draw of customization, and artificial intelligence is no exception. Microsoft believes the solution to delivering real-time AI, artificial intelligence that processes data immediately as it comes in, lies in FPGAs. While competitors like Google and Nvidia have entered the AI computing space with dedicated processors optimized for deep learning algorithms, Microsoft says that FPGAs allow it to deliver a cloud-based deep learning platform that offers ultralow latency and is more powerful and more easily scalable than other solutions. The company has codenamed its new platform Project Brainwave.
Microsoft already enjoys a widely distributed FPGA infrastructure thanks to its data centers and services like its search engine, Bing. Using FPGAs, Microsoft can run deep learning algorithms as a cloud-based service rather than running them on dedicated hardware. Project Brainwave is also set up to support already popular deep learning frameworks such as Google's TensorFlow and, of course, Microsoft's own Cognitive Toolkit.
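For context, the kind of model such a service would accept is an ordinary deep network defined in one of those frameworks. The sketch below is a generic TensorFlow (Keras) example of a small classifier; it is purely illustrative and does not use any Brainwave- or Azure-specific API, and the file path shown is arbitrary.

```python
# A minimal TensorFlow/Keras model of the sort a cloud inference service
# like Project Brainwave could host. Generic illustration only; this does
# not call any Brainwave/Azure API.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Once trained, the model would typically be exported (e.g. as a SavedModel)
# and handed off to the hosting service for accelerated inference.
model.save("saved_model")  # path name is hypothetical
```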
When run on Intel Stratix 10 FPGAs, Microsoft's Project Brainwave performed at 39.5 teraflops with less than one millisecond of latency. (Image source: Microsoft / Intel)
In a Microsoft blog post, Doug Burger, a Distinguished Engineer at the company, said that though hardened chips for processing deep neural networks (DNNs) have high peak performance, they have limited flexibility because their very design dictates the sorts of operations they are best suited for.
“Project Brainwave takes a different approach, providing a design that scales across a range of data types, with the desired data type being a synthesis-time decision,” Burger wrote. “The design combines both the ASIC digital signal processing blocks on the FPGAs and the synthesizable logic to provide a greater and more optimized number of functional units.” According to Burger, Microsoft's approach, which draws on the flexibility of FPGAs, allows for better performance as well as easier upgradability.
Project Brainwave was announced at the 2017 Hot Chips Symposium on High Performance Chips, where it was demonstrated, using Intel Stratix 10 FPGAs, achieving 39.5 teraflops on a single request with less than one millisecond of latency. “At that level of performance, the Brainwave architecture sustains execution of over 130,000 compute operations per cycle, driven by one macro-instruction being issued each 10 cycles. Running on Stratix 10, Project Brainwave thus achieves unprecedented levels of demonstrated real-time AI performance on extremely challenging models,” Burger wrote. “As we tune the system over the next few quarters, we expect significant further performance improvements.”
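As a rough sanity check on those figures, about 130,000 operations per cycle at a clock near 300 MHz works out to roughly 39 teraflops, in line with the reported 39.5. The clock rate here is our assumption for illustration; it is not stated in the article.

```python
# Back-of-the-envelope check of the Brainwave numbers quoted above.
# The ~300 MHz clock is an assumption; the article gives 39.5 teraflops
# and ~130,000 compute operations sustained per cycle.
ops_per_cycle = 130_000      # operations sustained per cycle
clock_hz = 300e6             # assumed FPGA clock (~300 MHz)

throughput = ops_per_cycle * clock_hz
print(f"{throughput / 1e12:.1f} teraflops")  # -> 39.0, close to the reported 39.5
```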
Microsoft's approach represents a departure from what we've seen from other big names looking to dominate the AI space, like Google and Nvidia. Nvidia, traditionally known for its high-powered graphics processing units (GPUs) for computer gaming, has made a big push into AI by leveraging the performance advantage of its GPUs over traditional CPUs for deep learning applications. In May of this year, Nvidia debuted the Tesla V100 (no relation to the car company), a high-powered, GPU-based hardware platform for deep learning in data centers.
Not to be outdone, that same month Google unveiled what it calls its tensor processing unit (TPU), an ASIC optimized for running neural networks in its data centers and for use with Google TensorFlow. Google says its TPU has far outperformed both CPUs and GPUs in its studies.
Achieving high-powered, energy-efficient, and ultralow-latency performance will be critical as demand for real-time AI applications grows. While devices like collaborative robots can benefit from training-based AI, more dynamic systems like autonomous vehicles, virtual assistants, and smart security and defense systems need to be able to act on the fly and may also need to be easily adjustable. Microsoft currently plans to roll out Project Brainwave via its Azure cloud services, which may end up being a hindrance depending on how willing companies are to jump onto Azure. However, as new solutions continue to emerge on all sides, Microsoft's FPGA approach may just be novel and powerful enough to sway potential partners.
Chris Wiltz is a senior editor at Design News covering emerging technologies including VR/AR, AI, and robotics.