There were a lot of great technologies and products discussed at DesignCon in San Jose, Calif. One topic, presented by AMD corporate vice president and CTO Joe Macri, should be of particular interest to engineers designing test, measurement, and control systems: heterogeneous systems.
On the surface, the concept of heterogeneous systems may sound too far removed or futuristic for everyday life in the engineering system design trenches. However, much as multicore processors have advanced system performance in recent years, heterogeneous systems will drive further gains in measurement speed, loop rates, and real-time processing, while also making systems less expensive.
In simple terms, a heterogeneous system is one that uses multiple processing targets in its architecture, such as CPUs, FPGAs, and GPUs. This type of computing architecture enables engineers to distribute data, processing, and program execution among different computing nodes that are each best suited to specific computational tasks. For example, an RF test system that uses heterogeneous computing may have a CPU controlling program execution with an FPGA performing inline demodulation and a GPU performing pattern matching before storing all the results on a remote server.
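To make the pattern concrete, here is a minimal Python sketch of that kind of pipeline. The offload functions (fpga_demodulate, gpu_pattern_match, store_results) are hypothetical placeholders for whatever driver or framework calls a real system would use; the point is only the CPU's role as orchestrator.

```python
def fpga_demodulate(raw_iq):
    """Hypothetical stand-in for inline demodulation offloaded to an FPGA."""
    return [abs(sample) for sample in raw_iq]

def gpu_pattern_match(demodulated):
    """Hypothetical stand-in for pattern matching offloaded to a GPU."""
    return [i for i, value in enumerate(demodulated) if value > 0.9]

def store_results(matches):
    """Hypothetical stand-in for pushing results to a remote server."""
    print(f"storing {len(matches)} matches")

def run_measurement(raw_iq):
    # The CPU coordinates program execution; each stage is handed to the
    # processing target best suited to it.
    demodulated = fpga_demodulate(raw_iq)     # FPGA: inline demodulation
    matches = gpu_pattern_match(demodulated)  # GPU: pattern matching
    store_results(matches)                    # remote server: storage

run_measurement([0.2, 0.95, -0.97, 0.1])
```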
As you can imagine, this approach provides incredible software-defined flexibility to adapt to future performance and system size requirements. The most common heterogeneous systems in test, measurement, and control today primarily use multicore processors and FPGAs, while more systems are likely to include ARM processors and GPUs in the future. Below is a brief overview of these common targets used in heterogeneous test and control systems:
The central processing unit (CPU) is a general-purpose processor with a robust instruction set and cache, as well as direct access to memory. Sequential in its execution, the CPU is especially suited to program execution and can be adapted to almost any processing activities. Advances in the last decade have led to multiple computing cores on a single chip, with most processors running two to four cores and many more cores planned for the future. These multicore systems enable operations to occur in parallel, but require the programmer to implement a multi-threaded application with an eye toward parallelization to fully take advantage of these systems’ capabilities.
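As a small illustration of what programming "with an eye toward parallelization" means in practice, the sketch below splits an embarrassingly parallel workload across worker processes using Python's standard-library concurrent.futures module; the per-chunk computation and worker count are placeholders, not recommendations.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze_chunk(chunk):
    # Placeholder per-chunk computation (e.g., filtering or feature extraction).
    return sum(x * x for x in chunk)

def analyze_parallel(samples, workers=4):
    # Split the data so each core works on an independent chunk in parallel.
    size = (len(samples) + workers - 1) // workers
    chunks = [samples[i:i + size] for i in range(0, len(samples), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # map() dispatches the chunks to separate processes, one per core.
        return sum(pool.map(analyze_chunk, chunks))

if __name__ == "__main__":
    data = [float(i) for i in range(1_000_000)]
    print(analyze_parallel(data))
```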
The graphics processing unit (GPU) is a specialized processor originally developed for the rendering of 2D and 3D computer graphics. The GPU has seen tremendous advances due to the need for more realistic graphics in computer video games. It achieves its performance by implementing a highly parallel architecture of hundreds to thousands of cores specifically suited to vector and shader transforms. Engineers are trying to adapt these specialized processing cores for use in general-purpose processing. Performance gains have already been seen with the use of GPUs in the areas of image processing and spectral monitoring.
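As one possible illustration, the sketch below offloads a spectral computation to the GPU with CuPy, a NumPy-compatible GPU array library; it assumes a CUDA-capable GPU and the cupy package are installed, and it is only one of several GPU-computing frameworks an engineer might reach for.

```python
import numpy as np
import cupy as cp  # NumPy-compatible arrays backed by the GPU (assumed installed)

# Simulated acquisition: one million samples generated on the host CPU.
samples = np.random.randn(1_000_000).astype(np.float32)

gpu_samples = cp.asarray(samples)            # copy the data into GPU memory
spectrum = cp.abs(cp.fft.rfft(gpu_samples))  # FFT runs across the GPU's many cores
peak_bin = int(cp.argmax(spectrum))          # only the small result returns to the CPU

print(f"strongest spectral component in bin {peak_bin}")
```

Note that the copy to and from GPU memory is often the limiting factor in such designs, so it generally pays to keep intermediate results on the device between processing steps.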
Thanks for the comments. It's great to see COTS technologies such as CPUs, GPUs, and FPGAs opening the doors to new levels of accessible heterogeneous computing architectures for engineers and scientists working on automated measurement and control systems. Historically, this would have required experts in each processing domain to develop the individual pieces of the solution, which is often time- and cost-prohibitive in automated measurement and control applications. Having the proper system design software, as mentioned in prior comments, is key to assisting engineers who do not have processor-specific development expertise. National Instruments LabVIEW (ni.com/labview) is a graphical system design environment for precisely this type of heterogeneous application development. In fact, thousands of engineers (and kids!) are already using it to develop advanced applications for everything from LEGO Mindstorms NXT robots to the CERN Large Hadron Collider beam control system.
This is an interesting development that will allow automation systems to leverage the availability of even greater amounts of processing power. Software and control tasks can be distributed for more efficient use of system resources. It will be interesting to see the breadth of applications and how main controllers and intelligent peripherals will be able to work together. I would think that the ability for the programmer to easily select among processing resources might be important, so they can manage the software project within a single tool. Definitely an interesting development.
These aren't particularly new in embedded HPC (high-performance computing) and other high-end embedded systems for real-time computing apps like signal processing. It sounds like the practice is migrating downward toward more high-volume applications.
We've been seeing a lot of applications in the CAD and design tool world make use of some type of heterogeneous approach, in particular leveraging GPUs to optimize performance for highly intensive computational work. While the approach seems to be a sound one, I imagine the burden of learning to program new architectures is as challenging on the software development side of the equation as it is for test, measurement, and control.