High-performance computers are vital to solving some of the world’s most critical problems in science, engineering and security, including cancer research, global warming, drug discovery and many others. The models behind these high-stakes problems have simply outgrown the capabilities of desktop computers.
That’s because Moore’s Law has arguably stalled for single-processor computers: the biggest leaps in computing power will come not from increasing a processor’s clock speed, but from making multiple processors work in parallel. Parallel processing, yesterday’s computational luxury, is today’s necessity. Scientists and engineers urgently need parallel computers for their number-crunching, whether on a multiprocessor workstation, a grid, a cluster or an enterprise-class supercomputer.
Yet parallel programming remains a black art beyond the reach of most scientists and engineers. It requires exotic programming in C or Fortran plus MPI (the Message Passing Interface, a library for inter-processor communication), typically done by a specialist programmer rather than the scientist who developed the original models and algorithms. It’s ironic: while computers have gotten faster and more affordable, the tools for programming them remain decades old (C is roughly 30 years old, Fortran roughly 50).
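To see why, consider what even a trivial data-parallel computation looks like in the message-passing style. The sketch below is illustrative only: the article refers to MPI code in C or Fortran, and this version uses the real mpi4py Python bindings to keep it short, but the burden is the same either way. The programmer must decompose the data by hand and reason explicitly about every processor.

```python
# Illustrative sketch of MPI-style programming, via the mpi4py bindings
# (the article's context is C or Fortran, where the boilerplate is
# heavier still). Run with, e.g.: mpirun -n 4 python mean.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # which processor am I?
size = comm.Get_size()   # how many processors in total?

n = 1_000_000            # total sample count, arbitrary for illustration
local_n = n // size      # the programmer decomposes the data by hand

# Each rank computes a partial result; rank 0 gathers and combines them.
local_sum = np.random.rand(local_n).sum()
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"mean = {total / (local_n * size):.6f}")
```

Even this toy example forces the user to think in ranks, decompositions and collective operations; a realistic simulation multiplies that bookkeeping across every data structure in the model.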
This stifling of technical computing is a recognized national problem. The last real innovation in parallel programming was the 1980s vectorizing compiler. There has since been precious little progress. No wonder that, in its 2005 report, the President’s Information Technology Advisory Committee stated that “… systems are difficult to program and their achieved performance is a small fraction of theoretical peak … new programming models and languages and high-level, more expressive tools must hide architectural details and parallelism.”
Fortunately, a new parallel programming model is emerging that bridges the best of both worlds: desktop tools and parallel high-performance computers. It lets scientists and engineers do their work in their favorite, familiar desktop tools while running the problems interactively on powerful parallel systems, without the need for programming priests or complex parallel coding. In other words, it lets you work in your preferred environment (“no change in religion”), hides the parallel programming challenges and gives easier access to parallel computing power. You can prototype in real time, with fine-grained control of both algorithms and data, while transparently harnessing parallel computing resources.
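The article does not name a specific product, so the sketch below uses Python’s standard-library process pool purely as a small-scale stand-in for the idea: the user writes an ordinary serial-looking function, and the runtime, not the user, distributes the work. The function name `simulate` and its parameters are hypothetical.

```python
# A minimal sketch of the "hide the parallelism" style the article
# describes, with Python's standard multiprocessing module as a
# stand-in. There are no ranks, sends or receives: the user's model
# code stays serial, and the pool fans the work out across processors.
from multiprocessing import Pool
import numpy as np

def simulate(seed: int) -> float:
    """The user's ordinary serial model code, unchanged."""
    rng = np.random.default_rng(seed)
    return rng.random(1_000_000).mean()

if __name__ == "__main__":
    with Pool() as pool:                        # workers allocated transparently
        results = pool.map(simulate, range(8))  # reads like a serial map
    print(sum(results) / len(results))
```

The same division of labor is what the emerging model promises at supercomputer scale: the user keeps the familiar environment, and the system handles the distribution.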
With this programming model, you write just enough of the application to start testing with real data, then refine it incrementally. This interactive workflow can yield a “time to first calculation” of minutes, rather than the months or years traditionally needed to program a parallel application from scratch. It has the potential to accelerate scientific discovery dramatically: development of custom code takes days or weeks rather than months or years, and high-performance computers become as interactive to use as our desktop PCs are today.
Over the past year, several software vendors have entered the market with parallel solutions that bridge these worlds, and the growing choice and competition is good news for end users.
Get ready for the new world of personal supercomputing.