The sea change in computing capabilities over the past decade is that compute cycles have become essentially free. Whereas circa 1995 a PC cost more than $1,000 and its processing power was limited, today you can build your own quad-core box with a 3-GHz processor for under $1,000. Such a machine can rapidly render video and speedily run complex FEA and CFD simulations. In that context, HPC really no longer refers (or should no longer refer) to a demarcation in compute capability, but rather to the OS and software stack (Microsoft HPC Server, etc.). I realize that I'm a little ahead of the curve here, that this isn't the common usage, and that there are still workstations that go well beyond the quad core of my example. But as they say on the street corner, I'm just sayin'...
Good point. HPC is now more often used to denote a certain level of compute horsepower, which traditionally has been available only in special compute clusters locked away in a room somewhere. Because those clusters were a central, shared resource, engineers and other users typically had to put their jobs in a queue, and it could take days or even weeks before their processing needs were handled.
NVIDIA Maximus (along with many other emerging technologies) is attempting to change that use case, putting HPC-level computing power on the desktop workstation platform and freeing up resources so that the same workstation can be used for other tasks while the simulation or rendering job grinds away in the background.
HPC on the desktop is, indeed, a sea change. It's also invading, if that's the right word, machine vision. Standalone machine vision systems based on powerful controller boxes are a trend, including at least one based on HPC:
Machine vision is a great application for HPC, as are simulation and high-end rendering. As the cost comes down and more enabling technologies come into play, I think we'll see even more headway and a greater variety of applications that weren't previously possible on a desktop platform.