There were a lot of great technologies and products discussed at DesignCon in San Jose, Calif. One topic that should be of key interest to engineers designing test, measurement, and control systems, presented by AMD corporate vice president and CTO Joe Macri, is heterogeneous systems.
On the surface, the concept of heterogeneous systems may sound too far removed or futuristic for everyday life in the engineering system design trenches. However, much as multicore processors have advanced system performance in recent years, heterogeneous systems will drive further gains in measurement speed, loop rates, and real-time processing, while also lowering system cost.
In simple terms, the concept of heterogeneous systems is the use of multiple processing targets in your system architecture, e.g., processors, FPGAs, GPUs, etc. This type of computing architecture enables engineers to distribute data, processing, and program execution among different computing nodes that are each best suited to specific computational tasks. For example, an RF test system that uses heterogeneous computing may have a CPU controlling program execution with an FPGA performing inline demodulation and a GPU performing pattern matching before storing all the results on a remote server.
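To make the division of labor concrete, here is a minimal sketch of such a pipeline in plain Python. The stage functions and target assignments are hypothetical placeholders, not an actual RF test framework; in a real system the demodulation and pattern-matching stages would execute on the FPGA and GPU rather than on the host:

```python
# Hypothetical sketch of a heterogeneous RF test pipeline.
# Each stage is annotated with the target best suited to it; here all
# stages run on the host purely to illustrate the flow of data.

def demodulate(samples):
    """Inline demodulation -> would run on the FPGA (toy stand-in)."""
    return [s % 2 for s in samples]

def pattern_match(symbols, pattern):
    """Pattern matching -> would run on the GPU (toy stand-in)."""
    n = len(pattern)
    return [i for i in range(len(symbols) - n + 1)
            if symbols[i:i + n] == pattern]

def run_pipeline(samples, pattern):
    """Program execution -> CPU: sequences the stages, gathers results."""
    symbols = demodulate(samples)              # offload to FPGA
    matches = pattern_match(symbols, pattern)  # offload to GPU
    return matches                             # CPU ships results to server

print(run_pipeline([3, 4, 7, 8, 1, 0], [1, 0]))  # → [0, 2, 4]
```

The point of the sketch is the partitioning, not the arithmetic: each stage has a clean data interface, so it can be moved to whichever target executes it best without restructuring the rest of the program.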
As you can imagine, this approach provides incredible software-defined flexibility to adapt to future performance and system size requirements. The most common heterogeneous systems in test, measurement, and control today primarily use multicore processors and FPGAs, while more systems are likely to include ARM processors and GPUs in the future. Below is a brief overview of these common targets used in heterogeneous test and control systems:
The central processing unit (CPU) is a general-purpose processor with a robust instruction set and cache, as well as direct access to memory. Sequential in its execution, the CPU is especially suited to program execution and can be adapted to almost any processing activities. Advances in the last decade have led to multiple computing cores on a single chip, with most processors running two to four cores and many more cores planned for the future. These multicore systems enable operations to occur in parallel, but require the programmer to implement a multi-threaded application with an eye toward parallelization to fully take advantage of these systems’ capabilities.
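As a minimal sketch of the parallelization the paragraph above describes, the standard-library snippet below splits a CPU-bound job across worker processes, one per core. The checksum workload is a toy stand-in for a real per-channel analysis task:

```python
# Spreading independent CPU-bound work across multiple cores.
# ProcessPoolExecutor uses separate processes, so each chunk can run
# on its own core; the sum-of-squares job is a toy stand-in for a
# real measurement-analysis task.
from concurrent.futures import ProcessPoolExecutor

def checksum(chunk):
    """CPU-bound stand-in for per-channel analysis."""
    return sum(x * x for x in chunk)

def parallel_checksum(data, workers=4):
    # Stripe the data into one chunk per worker, then reduce.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(checksum, chunks))

if __name__ == "__main__":
    print(parallel_checksum(list(range(10_000))))
```

Note that the program must be decomposed into independent chunks by the programmer; the executor only schedules them. That decomposition step is exactly the "eye toward parallelization" the paragraph mentions.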
The graphics processing unit (GPU) is a specialized processor originally developed for the rendering of 2D and 3D computer graphics. The GPU has seen tremendous advances due to the need for more realistic graphics in computer video games. It achieves its performance by implementing a highly parallel architecture of hundreds to thousands of cores specifically suited to vector and shader transforms. Engineers are trying to adapt these specialized processing cores for use in general-purpose processing. Performance gains have already been seen with the use of GPUs in the areas of image processing and spectral monitoring.
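To make the "hundreds to thousands of cores" idea concrete, here is a deliberately simplified model of GPU-style data parallelism in plain Python: a kernel is written for a single element, and is conceptually launched once per data element, as it would be across CUDA or OpenCL threads. The sequential loop below merely simulates that grid; on real GPU hardware the iterations run concurrently:

```python
# Simplified model of data-parallel GPU execution.
# A "kernel" is written from the point of view of one thread handling
# one element; launch() simulates the thread grid with a loop that a
# real GPU would execute in parallel.

def scale_kernel(thread_id, inp, out, gain):
    """Kernel body: each (virtual) thread scales one sample."""
    out[thread_id] = inp[thread_id] * gain

def launch(kernel, n_threads, *args):
    for tid in range(n_threads):  # concurrent on a GPU, serial here
        kernel(tid, *args)

signal = [0.5, 1.0, 2.0, 4.0]
result = [0.0] * len(signal)
launch(scale_kernel, len(signal), signal, result, 2.0)
print(result)  # → [1.0, 2.0, 4.0, 8.0]
```

Because every thread performs the same operation on different data, throughput scales with core count, which is why image processing and spectral monitoring, both dominated by elementwise and block-wise operations, map so well to GPUs.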
We've been seeing a lot of applications in the CAD and design tool world make use of some type of heterogeneous approach, in particular leveraging GPUs to optimize performance for highly intensive computational work. While the approach seems to be a sound one, I imagine the programming burden of learning new architectures is as challenging on the software development side of the equation as it is for test, measurement, and control.
These aren't particularly new in embedded HPC (high-performance computing) and other high-end embedded systems for real-time computing apps like signal processing. It sounds like the practice is migrating downward toward more high-volume applications.
This is an interesting development that will allow automation systems to leverage the availability of even greater amounts of processing power. Control tasks can be distributed in software for more efficient use of system resources. It will be interesting to see the breadth of applications and how main controllers and intelligent peripherals will be able to work together. I would think that the ability for the programmer to easily select among processing resources will be important, so they can manage the software project within a single tool. Definitely an interesting development.
Thanks for the comments. It's great to see COTS technologies such as CPUs, GPUs, and FPGAs opening the doors for new levels of accessible heterogeneous computing architectures for engineers and scientists working on automated measurement and control systems. Historically this would have required experts in each processing domain to develop the individual pieces of the solution, which is often time- and cost-prohibitive in automated measurement and control application areas. Having the proper system design software, as mentioned in prior comments, is key to assisting engineers who do not have processor-specific development expertise. National Instruments LabVIEW (ni.com/labview) is a graphical system design environment for precisely this type of heterogeneous application development. In fact, thousands of engineers (and kids!) are already using it to develop advanced applications for everything from LEGO Mindstorms NXT robots to the CERN Large Hadron Collider beam control system.