Supercomputer on a chip

Computer scientists at the University of Texas at Austin are inventing a radical microprocessor architecture, one that aims to solve some of the most vexing problems facing chip designers today. If successful, the Defense Department-funded effort could lead to processors of unprecedented performance and flexibility.

The density of transistors on a chip has doubled at least every two years for decades, and microprocessor designers have put those transistors to good use. Advanced circuits use techniques such as branch prediction and speculative execution to build deep instruction "pipelines" that raise throughput by executing multiple instructions simultaneously. But the growing complexity of such circuits, and the heat they produce, signal an end to that approach. Rather than trying to build ever-faster processor cores, chip builders are beginning to put more of them on a chip.

The problem with that, says Doug Burger, a computer science professor at the University of Texas, is that for application software to take advantage of those multiple cores, programmers must structure their code for parallel processing, and that's difficult or impossible for some applications. "The industry is running into a programmability wall, passing the buck to software and hoping the programmer will be able to write codes for their systems," he says.

Burger and his colleagues hope to solve these problems with a new microprocessor and instruction set architecture called Trips, or the Tera-op Reliable Intelligently Adaptive Processing System. "Our goal is to exploit concurrency, whether it's given to you by the programmer or not," he says.

Trips uses several techniques to do just that. First, the Trips compiler sends executable code to the hardware in blocks of up to 128 instructions. The processor "sees" and executes a block all at once, as if it were a single instruction, greatly decreasing the overhead associated with instruction handling and scheduling.
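
The block-atomic idea can be pictured with a short sketch, here in Python; the instruction format, names, and the simple chopping strategy are illustrative assumptions, since a real compiler would choose block boundaries far more carefully:

```python
# A minimal sketch of grouping a linear instruction stream into blocks that
# the hardware can fetch, schedule, and commit as single units. The
# 128-instruction limit comes from the article; everything else here
# (instruction format, function name) is invented for the example.

MAX_BLOCK_SIZE = 128

def form_blocks(instructions, max_size=MAX_BLOCK_SIZE):
    """Split a flat instruction list into block-sized chunks."""
    return [instructions[i:i + max_size]
            for i in range(0, len(instructions), max_size)]

program = [("add", "r1", "r2", "r3")] * 300      # a toy 300-instruction program
print([len(b) for b in form_blocks(program)])    # -> [128, 128, 44]
```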

Second, instructions inside a block execute in a "data flow" fashion, meaning that each instruction executes as soon as its inputs arrive, rather than in some sequence imposed by the compiler or the programmer. "As such, the data is flowing through the instructions," explains Steve Keckler, a computer science professor who co-leads the Trips project with Burger.
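
As a rough illustration of fire-when-ready execution, here is a toy dataflow interpreter for one block; the encoding, value names, and scheduler loop are assumptions made for the example, not the actual Trips hardware mechanism:

```python
# Toy dataflow execution of one block: an instruction fires as soon as all of
# its inputs exist, not because of its position in the listing.
import operator

OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

block = [
    ("i0", "mul", ("t1", "c"), "t2"),   # listed first, but must wait for t1
    ("i1", "add", ("a", "b"), "t1"),    # ready immediately
    ("i2", "sub", ("a", "c"), "t3"),    # independent, also ready immediately
]

def run_block(block, values):
    values, pending, fired = dict(values), list(block), []
    while pending:
        ready = [ins for ins in pending if all(s in values for s in ins[2])]
        if not ready:
            raise RuntimeError("no instruction can fire; missing inputs")
        for name, op, srcs, dst in ready:        # fire the whole ready wavefront
            values[dst] = OPS[op](*(values[s] for s in srcs))
            fired.append(name)
        pending = [ins for ins in pending if ins[0] not in fired]
    return values, fired

values, order = run_block(block, {"a": 2, "b": 3, "c": 4})
print(order)         # -> ['i1', 'i2', 'i0']: data availability, not listing order
print(values["t2"])  # -> (2 + 3) * 4 = 20
```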

Increasing Efficiency

Another trick: Within a block, the Trips compiler can merge two instructions that are on different paths into a single instruction if they have the same target and operation. Compared with earlier designs based on data flow concepts, "our aggressive data-flow model gives the compiler the opportunity to produce much tighter and more efficient code," says professor Kathryn McKinley, who heads the compiler portion of the Trips project.
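
A much-simplified sketch of that kind of merge, using an invented predicated-instruction record and the extra simplifying assumption (not stated in the article) that the two arms also share the same inputs:

```python
# Merging two instructions from opposite branch paths when they perform the
# same operation to the same target. The representation is invented for the
# example; a real compiler works with richer constraints.
from collections import namedtuple

Instr = namedtuple("Instr", "predicate op inputs target")   # predicate = (name, value)

def merge_opposite_paths(then_i, else_i):
    """Collapse a matching then/else pair into one unpredicated instruction."""
    opposite_arms = (then_i.predicate[0] == else_i.predicate[0] and
                     then_i.predicate[1] != else_i.predicate[1])
    same_work = (then_i.op == else_i.op and
                 then_i.target == else_i.target and
                 then_i.inputs == else_i.inputs)
    if opposite_arms and same_work:
        return [Instr(None, then_i.op, then_i.inputs, then_i.target)]
    return [then_i, else_i]

then_arm = Instr(("p", True),  "add", ("r1", "r2"), "t5")
else_arm = Instr(("p", False), "add", ("r1", "r2"), "t5")
print(merge_opposite_paths(then_arm, else_arm))   # -> one instruction instead of two
```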

Finally, data flow execution is enabled by "direct target encoding," by which the results from one instruction go directly to the next consuming instruction without being temporarily stored in a centralized register file. That further reduces processing overhead and speeds computation.
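
The contrast with a conventional register-file encoding can be sketched like this; the field names and formats are illustrative only, not the actual Trips instruction format:

```python
# Conventional style: each instruction names a destination register, and a
# consumer later reads that value back out of a shared, centralized register file.
register_file_style = [
    {"op": "add", "srcs": ("r1", "r2"), "dst": "r3"},
    {"op": "mul", "srcs": ("r3", "r4"), "dst": "r5"},   # rereads r3 from the file
]

# Direct target encoding: the producer names its consumer's operand slot, so
# the result is forwarded point-to-point and never parks in a central file.
direct_target_style = [
    {"op": "add", "targets": [(1, "left")]},   # result feeds instruction 1, left operand
    {"op": "mul", "targets": []},              # block output, no in-block consumers
]
```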

And compared with traditional methods for improving performance -- increasing processor clock speeds and building deeper pipelines -- the performance improvements enabled by these techniques come at a modest increase in power consumption.

The challenge of dealing with power consumption is forcing chip builders to move to multicore chips. Former Intel Corp. engineer Mark McDermott, now engineering vice president at Coherent Logix Inc. in Austin, says, "You look at something like Pentium, and there's a huge amount of control logic, control transistors that don't do any work -- they just consume power. Trips is trying to push some of that complexity back up into the compiler.

"Where Trips will really shine is in very, very high-performance data flow embedded computing, like software-defined radio," he says.

But, McDermott adds, "I don't know if it's a silver bullet yet. There's still a fair amount of research to be done."

According to its developers, Trips' data flow techniques work quite well with the three kinds of concurrency found in software -- instruction-level, thread-level and data-level parallelism. For that reason, Trips is said to be "polymorphous," meaning that it can perform well on widely differing types of applications -- scientific, commercial and embedded.

And that's exactly the quality sought by the Defense Advanced Research Projects Agency in its Polymorphous Computing Architectures project. DARPA, which is contributing US$15.4 million to Trips, is looking for a chip that is able to scale to 1 trillion sustained operations (tera-op) per second on many applications.

The university is about to deliver its Trips design to IBM, which will fabricate prototype chips and return them in February. The chips will have two processor cores, each able to execute 16 instructions simultaneously. Running at 500 MHz, the chips will perform 16 billion operations per second, Keckler says. The university will look to industry to commercialize the technology and meet DARPA's goal of offering 10-GHz chips capable of 1 tera-op by 2012, he says.
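
A quick back-of-the-envelope check of those figures; the arithmetic simply restates the numbers quoted in the article:

```python
# Prototype: 2 cores x 16 instructions per cycle x 500 MHz.
cores = 2
instrs_per_cycle_per_core = 16
prototype_clock_hz = 500e6

peak = cores * instrs_per_cycle_per_core * prototype_clock_hz
print(f"{peak:.2e} ops/sec")    # -> 1.60e+10, i.e. 16 billion operations per second

# DARPA's goal: 1 tera-op sustained. At a 10 GHz clock that implies roughly
# 100 operations completing every cycle across the chip.
print(1e12 / 10e9)              # -> 100.0
```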

"We have an active interest in commercializing the technology, and we are looking for commercial partners," Burger says. Widespread commercial availability is what DARPA is hoping for as well. That plus polymorphism would mean the Pentagon could buy cheap, off-the-shelf chips for a fraction of what it now pays for exotic, custom-made processors for individual systems.

Chuck Moore, a senior fellow at Advanced Micro Devices Inc., says Trips offers a lot of promise. "The concepts are well aligned with the way code really behaves," he says. "The polymorphous aspects of Trips can enable it to do well on a wide variety of workloads."

One of the big challenges to becoming a mainstream commercial processor is compatibility with existing software and systems, especially x86 compatibility, Moore says. But one way to maintain compatibility would be to use Trips as a co-processor, he says. "The general-purpose [x86] processor could offload heavy tasks onto the co-processor while still handling legacy compatibility on its own."

Despite the promise of the Trips technology, Moore cautions, "on a marketing level, it is tricky to introduce radically new things. It seems like it will need to start in a specific niche and demonstrate the advantages there. Once it has been proven to be useful in some key market, it is more likely it can spread more broadly."
