NVIDIA has done a lot in the last decade to unlock the power of GPUs, with perhaps nothing as potent as its CUDA C programming environment. With CUDA, software developers can more easily and efficiently write programs that tap the massively parallel architecture of NVIDIA’s GPUs to accelerate computational problems.
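To give a sense of what that programming model looks like, here is a minimal, generic CUDA C sketch (a standard vector-add example, not drawn from any application mentioned in this article): each GPU thread computes one element of the result, which is how CUDA exposes the hardware’s parallelism to ordinary C code.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Each thread adds one pair of elements; thousands of threads run in parallel.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)               // guard threads that land past the end of the array
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);

    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = 2.0f * i; }

    float *d_a, *d_b, *d_c;             // device (GPU) buffers
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[100] = %f\n", h_c[100]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

The `<<<blocks, threads>>>` launch syntax is the heart of the model: the same scalar C function is executed by a grid of threads, and the developer’s job is mostly to map array indices onto thread indices.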
Now, the company is following up with CUDA 2.0, a new version available as a free download, that adds support for 32- and 64-bit Windows Vista and Mac OS X along with 3D textures and hardware interpolation. The goal is to increase the efficiency of applications in fields such as medical imaging, product design, scientific research, and oil and gas seismic computing. CUDA 2.0 also includes an Adobe Photoshop plug-in example, which company officials say shows developers how to design plug-ins that offload Photoshop’s most compute-intensive functions, including filtering and image manipulation, to the GPU for dramatic performance gains.
One of the more compelling recent examples of CUDA’s power is Stanford University’s Folding@home distributed computing application. Folding@home combines the computing horsepower of millions of processors to simulate protein folding, making it a major force in the search for cures to life-threatening diseases such as cancer, cystic fibrosis and Parkinson’s disease.
Using CUDA, the Folding@home team developed a client specifically for NVIDIA GPUs, which has delivered more processing power than any other architecture in the history of the project, according to Stanford officials. Statistics published by Stanford show active NVIDIA GPUs delivering over 1.25 petaflops of processing power, or 42% of the application’s total, from just 11,370 active processors. By comparison, the project’s 208,268 active CPUs running Windows contribute 198 teraflops, or 6% of the total processing power.
NVIDIA and Stanford say that by running the Folding@home client on NVIDIA GPUs, protein-folding simulations can be done 140 times faster than on some of today’s traditional CPUs.
Truchard will be presented with the award at the 2014 Golden Mousetrap Awards ceremony, held alongside the co-located events Pacific Design & Manufacturing, MD&M West, WestPack, PLASTEC West, Electronics West, ATX West, and AeroCon.
Robots that walk have come a long way from simple, bare-bones walking machines or pairs of legs without an upper body and head. Much of the research these days focuses on making more humanoid robots. But they are not all created equal.
The IEEE Computer Society has named the top 10 trends for 2014. You can expect the convergence of cloud computing and mobile devices, advances in health care data and devices, as well as privacy issues in social media to make the headlines. And 3D printing came out of nowhere to make a big splash.
In industrial control applications, or even on a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That’s where the “smart” machine comes in. A smart machine is one with some processing capability, simple or in some cases complex, that lets it adapt to changing conditions. Such machines are suited to a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what’s possible with smart machines, and what tradeoffs need to be made to implement such a solution.