Breaking the Next Flop Barrier

DN Staff

July 18, 2005


Cray Inc. is laying out its supercomputer architectures for the remainder of the decade, planning systems that are more efficient and easier to link together. The systems should push performance past the petaflop range by the end of the decade.

The Seattle-based company recently inked an agreement with the U.S. government to continue developing a next-generation supercomputer code-named Black Widow. Cray and the government will each invest about $17 million through 2007.

Black Widow is expected to reach a peak performance of several hundred teraflops initially and to exceed a petaflop (a thousand trillion calculations per second) during its product lifetime.

Looking further out, Cray is working closely with DARPA to continue developing advanced systems. These efforts, designed to yield products by 2010, will build upon work done under the existing cooperative agreement.

The later phase will merge Cray's proprietary vector systems with its scalar technologies. "We want to integrate them so we can get systems with different sorts of computation capability," says Steve Scott, CTO at Cray.

Performance increases

DARPA and partners are addressing issues that plague designers today. "High-end computing systems don't scale well when they're put in clusters, and they tend to be fragile, with a lot of reliability issues," Scott says.

The performance target is a petaflop and, ultimately, multiple petaflops. Advances will come in many areas that share a single focus. "The common theme is bandwidth. Bandwidth is not only the most important aspect, it's the most expensive," Scott says.

As processor speeds increase, the ability to get data from memory becomes a bottleneck, one that widens at around 50 percent per year.

FLOPS are becoming cheaper, doubling in line with Moore's Law every 18 months, but bandwidth is increasing far more slowly, he explains. One solution is to use bandwidth wisely. "We want to reduce the use of bandwidth. With fast processors, we want to pull data in and operate on it, using it several times in that processor," Scott says.
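Scott doesn't detail how Cray's machines will reuse data, but the principle he describes matches classic loop tiling (cache blocking), where an operand is pulled in once and operated on several times before it is evicted. Below is a minimal sketch in C; the matrix size N and tile size B are assumptions chosen purely for illustration, not anything from Cray.

```c
/* A minimal sketch of data reuse via loop tiling (cache blocking).
   This illustrates the general principle Scott describes, not
   Cray's actual implementation. N and B are assumed values. */
#include <stdio.h>

#define N 256   /* matrix dimension (assumption for the example)  */
#define B 32    /* tile size: three B x B tiles should fit in cache */

static double a[N][N], b[N][N], c[N][N];   /* c is zero-initialized */

int main(void) {
    /* Fill the inputs with arbitrary values. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = (double)(i + j);
            b[i][j] = (double)(i - j);
        }

    /* Tiled matrix multiply: each element of a tile is pulled in
       once and reused B times before moving on, instead of being
       refetched from memory on every pass of the naive loop order. */
    for (int ii = 0; ii < N; ii += B)
        for (int jj = 0; jj < N; jj += B)
            for (int kk = 0; kk < N; kk += B)
                for (int i = ii; i < ii + B; i++)
                    for (int k = kk; k < kk + B; k++) {
                        double aik = a[i][k];  /* loaded once, reused B times */
                        for (int j = jj; j < jj + B; j++)
                            c[i][j] += aik * b[k][j];
                    }

    printf("c[0][0] = %f\n", c[0][0]);
    return 0;
}
```

With B-by-B tiles, each value loaded from memory is reused roughly B times, so the same computation needs about 1/B of the memory traffic of the untiled loop order.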

Bandwidth has a major impact on memory access. "Memory capacity is getting cheaper and cheaper at Moore's Law rates, but the bandwidth to get to memory is not getting faster at that rate," Scott says.

Another approach is to distribute processors. "Rather than send data to processors, we'll sprinkle processors out in the memory subsystem, performing operations on data where it sits to reduce the amount of data that gets sent across networks," Scott says. That provides the same result as increasing bandwidth, he adds.
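Today this idea goes by names like processing-in-memory or near-data computing. The article doesn't describe Cray's actual hardware, so the toy model below, with invented node counts and array sizes, only illustrates the bandwidth argument: each memory-side processor reduces its own data and sends a single partial result across the network instead of the raw data.

```c
/* A toy model of operating on data where it sits. Each "memory
   node" reduces its local data to one partial sum and ships only
   that, rather than sending every element to a central processor.
   NODES and PER_NODE are assumptions; this is not Cray's design. */
#include <stdio.h>

#define NODES    4
#define PER_NODE 1000

/* Stands in for a processor sprinkled into the memory subsystem:
   it operates on local data without moving it over the network. */
static double local_reduce(const double *data, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += data[i];
    return sum;
}

int main(void) {
    static double memory[NODES][PER_NODE];

    /* Fill each node's local memory with arbitrary values. */
    for (int node = 0; node < NODES; node++)
        for (int i = 0; i < PER_NODE; i++)
            memory[node][i] = node + i * 0.001;

    /* Naive approach: ship NODES * PER_NODE values to one processor.
       Near-data approach: ship NODES partial results. */
    double total = 0.0;
    for (int node = 0; node < NODES; node++)
        total += local_reduce(memory[node], PER_NODE); /* one value crosses the "network" */

    printf("total = %f (traffic: %d values instead of %d)\n",
           total, NODES, NODES * PER_NODE);
    return 0;
}
```

Here only NODES values cross the network instead of NODES * PER_NODE, which delivers the same effect as a large increase in raw bandwidth.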
