Field-programmable gate arrays (FPGAs), unlike CPUs and GPUs, do not have defined instruction sets or processing capabilities. Instead, they are reprogrammable silicon made up of logic gates, which lets users build custom processors to meet their exact needs. They also provide hardware-timed execution whose determinism and reliability make them especially well suited for inline signal processing and system control. This increased performance, however, comes at the cost of greater programming complexity and the inability to change processing functionality in the middle of program execution.
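For a concrete flavor of the inline signal processing this determinism enables, below is a minimal sketch of one step of a streaming FIR filter, written as the kind of plain C-style loop that high-level synthesis (HLS) tools can compile into FPGA fabric with a fixed, clock-driven latency. The tap count, function name, and calling convention here are illustrative assumptions, not anything from the article.

    #define NTAPS 8

    // One streaming FIR step: shift in the new sample, return the weighted sum.
    // Loops of this fixed, data-independent shape are what HLS tools map onto
    // parallel multiply-accumulate hardware with deterministic latency.
    float fir_step(float sample, float delay[NTAPS], const float coeff[NTAPS])
    {
        for (int i = NTAPS - 1; i > 0; --i)   // shift register: oldest sample falls off
            delay[i] = delay[i - 1];
        delay[0] = sample;

        float acc = 0.0f;
        for (int i = 0; i < NTAPS; ++i)       // multiply-accumulate across the taps
            acc += coeff[i] * delay[i];
        return acc;
    }

On a CPU this runs as a sequence of instructions; synthesized onto an FPGA, the taps become parallel hardware that produces one output per clock tick, which is where the hardware-timed determinism comes from.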
Cloud computing is not a specific type of processor but a collection of computing resources accessible via the Internet. Its power is that it frees users from having to purchase, maintain, and upgrade their own computing resources; instead, they can rent just the processing time and storage space their applications need. Cloud computing use has grown rapidly, with HP predicting that 76 percent of businesses will pursue some form of it within the next two years. However, while it provides access to some of the most powerful computers in the world, cloud computing has the drawback of very high latency: data must travel over the Internet, which makes it difficult or impossible to use in test systems that require deterministic processing. Cloud computing remains well suited, though, for offline analysis and data storage.
Heterogeneous computing provides new and powerful computing architectures, but it also introduces additional complexities in test system development -- the most prevalent being the need to learn a different programming paradigm for each type of computing node. For instance, to fully use a GPU, programmers must restructure their algorithms to massively parallelize their data and translate the algorithm math into graphics-rendering functions. With FPGAs, it often takes knowledge of hardware description languages such as VHDL to configure specific processing capabilities.
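As a rough sketch of what that data parallelization looks like in practice -- using CUDA rather than the graphics-shader translation mentioned above, and with all names here being hypothetical -- a loop a CPU would execute serially is rewritten so that thousands of GPU threads each process one sample:

    #include <cuda_runtime.h>
    #include <cstdio>

    // Each GPU thread scales one sample of the input signal; the serial
    // "for" loop a CPU would run is flattened across thousands of threads.
    __global__ void scaleSignal(const float *in, float *out, float gain, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)                                      // guard the final partial block
            out[i] = gain * in[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                 // ~1M samples
        size_t bytes = n * sizeof(float);

        float *in, *out;
        cudaMallocManaged(&in, bytes);         // unified memory keeps the sketch short
        cudaMallocManaged(&out, bytes);
        for (int i = 0; i < n; ++i) in[i] = (float)i;

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scaleSignal<<<blocks, threads>>>(in, out, 2.0f, n);
        cudaDeviceSynchronize();

        printf("out[42] = %f\n", out[42]);     // expect 84.0
        cudaFree(in);
        cudaFree(out);
        return 0;
    }

The same gain-scaling loop on a CPU would touch each of the million samples in turn; here every sample gets its own thread, which is the massive parallelization the paradigm shift demands.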
Fortunately, National Instruments has been watching this trend toward heterogeneous systems for the past decade (see 2011 NI Automated Test Outlook - Heterogeneous Computing) and has been investing in the development of software and hardware products that engineers can readily apply in their applications without low-level computer architecture or programming knowledge. Many companies have already experienced the significant performance gains from multicore computing and are beginning to experiment further with additional computing nodes in a heterogeneous computing architecture for test, measurement, and control applications.
It's truly amazing to see the results from engineers applying these state-of-the-art tools in their domain applications today. I encourage you to invest time to learn more about this exciting new technology and begin thinking about how you can take advantage of it in your next system.
About the Author
Richard McDonell is the director for Americas strategic and technical marketing at National Instruments. His specific technical focus areas include modular test software and hardware system design, parallel test strategy, and instrument control bus technology. He holds a Bachelor's degree in electrical engineering from Texas A&M University.
Thanks for the comments. It's great to see COTS technologies such as CPUs, GPUs, and FPGAs opening the doors to new levels of accessible heterogeneous computing architectures for engineers and scientists working on automated measurement and control systems. Historically, this would have required experts in each processing domain to develop the individual pieces of the solution, which is often time- and cost-prohibitive in automated measurement and control application areas. Having the proper system design software, as mentioned in prior comments, is key to assisting engineers who do not have processor-specific development expertise. National Instruments LabVIEW (ni.com/labview) is a graphical system design environment for precisely this type of heterogeneous application development. In fact, thousands of engineers (and kids!) are already using it to develop advanced applications for everything from LEGO Mindstorms NXT robots to the CERN Large Hadron Collider beam control system.
This is an interesting development that will allow automation systems to leverage the availability of even greater amounts of processing power. Software and control tasks can be distributed for more efficient use of system resources. It will be interesting to see the breadth of applications and how main controllers and intelligent peripherals will be able to work together. I would think that the ability for the programmer to easily select among processing resources will be important, so they can manage the software project within a single tool. Definitely an interesting development.
These approaches aren't particularly new in embedded HPC (high-performance computing) and other high-end embedded systems for real-time computing applications like signal processing. It sounds like the practice is migrating down toward higher-volume applications.
We've been seeing a lot of applications in the CAD and design tool world take some type of heterogeneous approach, in particular leveraging GPUs to optimize performance for computationally intensive work. While the approach seems sound, I imagine the programming burden of learning new architectures is just as challenging on that software development side of the equation as it is for test, measurement, and control.