Microsoft has announced the Microsoft
Technical Computing Initiative, a new effort and organization charged with
bringing supercomputing horsepower and resources to a much wider audience of
scientists, researchers and engineers.
The effort, which has been operating in "stealth mode" for the
last 18 months, aims to simplify access to supercomputing capacity and help democratize
supercomputing for a broader set of users, according to Bill Hilf, Microsoft's
general manager, technical computing. The explosion of data, which is massively
outpacing computational growth, the notion of parallelism and the impact of
cloud computing are the macro trends prompting Microsoft to aggressively pursue
this new venture, Hilf says.
"What's been happening to make this more important to the
broader world is the end of the explosion of Moore's law, where we get faster
clock speeds every 18 months," Hilf explains. "The transition to multicore
systems has created a crucial need for the software community, everyone from
CAD/CAM providers to operating system vendors, to fundamentally rewrite software
to take advantage of parallelism."
The idea of breaking up a massive computing task or problem, such as a complex
simulation, and distributing the processing work across resources, i.e.,
parallelism, applies to multicore desktop systems, cluster computing
environments and even the large-scale resources of cloud computing. The
problem, as Hilf explains it, is that 99% of the world's software isn't written
to run in parallel, which is what is required to take advantage of the new
architectures.
"Any time people need to break up and distribute a problem against large scale resources,
be it on a client, in a cluster or in the cloud--those are the environments
where we're focused on helping simplify and broaden the availability of
supercomputing," Hilf explains.
The company currently offers Windows HPC Server,
which delivers high-performance computing power at the cluster level, and it
plans to release a new version this fall with more advanced
capabilities. For example, the new version will automatically bring unused
PC resources into the cluster during off-hours in the evening to tackle a
high-performance computing problem, Hilf explains.
Microsoft is also tapping its Azure cloud computing
platform to deliver high-performance computing cycles in the cloud to
augment on-premises systems and deliver "just in time" processing.
The initiative also includes new tools that will simplify parallel software development. Hilf says
parallel programs are extremely difficult to write, test and debug, and
Microsoft is committed to building new tools that will help automate and
simplify the process of writing parallel programs that will scale from the
desktop to the cluster to the cloud. "The parallelism pressure has
already started in a significant way," he explains. "Simplified tools will
allow those serial developers to exploit parallel development. We see this
opening up those things that were previously only relegated to high-end
customers to a broader set of users."
Also on the roadmap are new development tools and run-time platforms that will allow applications
or even engineering models to seamlessly scale from multicore PCs to
multi-server clusters to a multi-instance cloud environment, depending on
the need for compute power. This idea of flexibility and choice between
running in a client, cluster or cloud environment is where Microsoft
really sees its key differentiation, Hilf says. Consider the development
of an aircraft, for example. With the Microsoft High Performance Computing
vision, one or two variables involved in a wing design could be simulated
on a workstation, while the relationship of the wing to the entire
fuselage could be explored on the cluster, and a simulation of the entire
aircraft across a huge amount of weather or physics data could leverage the
computational capacity of the cloud. "To do that now requires rewriting
the application at every step," Hilf says. In the Microsoft vision, the
parallel computing run-time platforms would allow the model or application
to take advantage of whatever resources lie beneath it without having to
change the application, he explains.
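A loose sketch of that idea, using Python's generic executor interface purely
for illustration: the wing_lift model and the run_cases helper are
hypothetical, and the cluster- and cloud-backed run-times Hilf describes are
not shown, but the same application code could in principle run against any
backend that honors the interface.

    # Hypothetical sketch: the model code stays fixed; only the executor changes.
    from concurrent.futures import ProcessPoolExecutor

    def wing_lift(angle_of_attack):
        # Stand-in for one simulation run on one design variable.
        return 2 * 3.14159 * angle_of_attack

    def run_cases(executor, cases):
        # The application calls this the same way regardless of the backend.
        return list(executor.map(wing_lift, cases))

    if __name__ == "__main__":
        cases = [0.01 * i for i in range(100)]

        # On a workstation, use the local cores.
        with ProcessPoolExecutor() as local:
            results = run_cases(local, cases)

        # A cluster- or cloud-backed executor exposing the same map() method
        # could be passed to run_cases() without touching wing_lift().
        print(len(results))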