Field-programmable gate arrays (FPGAs), unlike CPUs and GPUs, do not have defined instruction sets or fixed processing capabilities. Instead, they are reprogrammable silicon devices composed of logic gates that let users build custom processors to meet their exact needs. They also provide hardware-timed execution, which delivers a level of determinism and reliability that makes them especially suited for inline signal processing and system control. This increased performance, however, comes with the trade-off of increased programming complexity and the inability to change processing functionality in the middle of program execution.
Cloud computing is not a specific type of processor but a collection of computing resources accessible via the Internet. The power of cloud computing is that it frees users from having to purchase, maintain, and upgrade their own computing resources; instead, they can rent just the processing time and storage space their applications require. Cloud computing use has grown rapidly, with HP predicting that 76 percent of businesses will pursue some form of it within the next two years. However, while it does provide access to some of the most powerful computers in the world, cloud computing has the drawback of very high latency. Data must be transferred over the Internet, making it difficult or impossible to use in test systems that require deterministic processing capabilities. Cloud computing remains well suited, though, for offline analysis and data storage.
Heterogeneous computing provides new and powerful computing architectures, but it also introduces additional complexities in test system development -- the most prevalent being the need to learn a different programming paradigm for each type of computing node. For instance, to fully use a GPU, programmers must restructure their algorithms to massively parallelize their data and translate the algorithm math into graphics-rendering functions. FPGAs often require knowledge of hardware description languages such as VHDL to configure specific processing capabilities.
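To make the paradigm shift concrete, here is a minimal sketch (not NI or GPU vendor code) of the restructuring described above: the same gain computation written first as a sequential per-sample loop, then in the data-parallel style a GPU requires, where one operation is applied to an entire array at once. NumPy stands in here for an actual GPU framework; the function names are illustrative only.

```python
import numpy as np

def scale_sequential(samples, gain):
    # CPU-style: visit one sample at a time in a loop.
    out = []
    for s in samples:
        out.append(s * gain)
    return out

def scale_data_parallel(samples, gain):
    # GPU-style: express the work as a single whole-array operation.
    # A framework like CUDA would map each element to its own thread;
    # NumPy's vectorized multiply plays that role in this sketch.
    return (np.asarray(samples, dtype=float) * gain).tolist()

signal = [0.0, 1.0, 2.0, 3.0]
assert scale_sequential(signal, 2.0) == scale_data_parallel(signal, 2.0)
```

The results are identical; what changes is how the work is expressed, and that re-expression is exactly the programming burden the article attributes to heterogeneous architectures.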
Fortunately, National Instruments has been watching this trend toward heterogeneous systems for the past decade (see 2011 NI Automated Test Outlook - Heterogeneous Computing) and has been investing in the development of software and hardware products that engineers can readily apply in their applications without low-level computer architecture or programming knowledge. Many companies have already experienced the significant performance gains from multicore computing and are beginning to experiment further with additional computing nodes in a heterogeneous computing architecture for test, measurement, and control applications.
It's truly amazing to see the results from engineers applying these state-of-the-art tools in their domain applications today. I encourage you to invest time to learn more about this exciting new technology and begin thinking about how you can take advantage of it in your next system.
About the Author
Richard McDonell is the director for Americas strategic and technical marketing at National Instruments. His specific technical focus areas include modular test software and hardware system design, parallel test strategy, and instrument control bus technology. He holds a Bachelor's degree in electrical engineering from Texas A&M University.
Thanks for the comments. It's great to see COTS technologies such as CPUs, GPUs, and FPGAs opening the doors for new levels of accessible heterogeneous computing architectures for engineers and scientists working on automated measurement and control systems. Historically this would have required experts in each processing domain to develop the individual pieces of the solution, which is often time and cost prohibitive in automated measurement and control application areas. Having the proper system design software, as mentioned in prior comments, is key to assisting engineers who do not have processor-specific development expertise. National Instruments LabVIEW (ni.com/labview) is a graphical system design environment for precisely this type of heterogeneous application development. In fact, thousands of engineers (and kids!) are already using it to develop advanced applications for everything from LEGO Mindstorms NXT robots to the CERN Large Hadron Collider beam control system.
This is an interesting development that will allow automation systems to leverage the availability of even greater amounts of processing power. Software such as control tasks can be distributed for more efficient use of system resources. It will be interesting to see the breadth of applications and how main controllers and intelligent peripherals will be able to work together. I would think that the ability for the programmer to easily select among processing resources might be important, so they can manage the software project within a single tool. Definitely an interesting development.
These aren't particularly new in embedded HPC (high-performance computing) and other high-end embedded systems for real-time computing apps like signal processing. It sounds like the practice is migrating downward toward more high-volume applications.
We've been seeing a lot of applications in the CAD and design tool world adopt some type of heterogeneous approach, in particular leveraging GPUs to optimize performance for highly intensive computational work. While the approach seems sound, I imagine the programming burden of learning new architectures is just as challenging for software development as it is for test, measurement, and control.