The Multicore Challenge

August 26th, 2008

Researchers working in areas spanning computer architecture, programming languages, operating systems, algorithms, and more have been thinking harder about the problem of parallel computing. Why has the age-old concept of parallelism become so “hot” today? To kick off an upcoming series of opinion pieces, we asked David Patterson, Professor of Computer Science at UC Berkeley, for his thoughts, and for the rationale for increased government funding to solve the multicore challenge.

Since the first commercial computer in 1950, the information technology industry has improved the cost-performance of computing by a factor of about 100 billion overall. For most of the last 20 years, architects used the rapidly increasing transistor speed and budget made possible by silicon technology advances to double performance every 18 months. The implicit hardware/software contract was that increases in transistor count and power dissipation were OK as long as architects maintained the existing programming model. This contract led to innovations that were inefficient in transistors and power but which increased performance. This contract worked fine until we hit the limit on the power a chip could dissipate.
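That doubling cadence compounds quickly. A back-of-the-envelope sketch (assuming, for illustration only, a perfectly regular 18-month doubling sustained over the 20-year period mentioned above):

```python
# Back-of-the-envelope: performance doubling every 18 months for 20 years.
# The clean, constant doubling period is an illustrative assumption;
# real-world gains were less regular.
years = 20
doubling_period_years = 1.5
doublings = years / doubling_period_years  # ~13.3 doublings
speedup = 2 ** doublings                   # roughly 10,000x
print(f"{doublings:.1f} doublings -> ~{speedup:,.0f}x performance")
```

Roughly four orders of magnitude in two decades, which is why the end of this trend was such a shock to the industry.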

Computer architects were forced to find a new paradigm to sustain ever-increasing performance. The industry decided that the only viable option was to replace the single power-inefficient processor with several more efficient processors on the same chip. The whole microprocessor industry thus declared that its future was in parallel computing, with the number of processors, or cores, doubling each technology generation, which occurs every two years. This style of chip was labeled a multicore microprocessor. Hence, the leap to multicore is not based on a breakthrough in programming or architecture; it’s actually a retreat from the even harder task of building power-efficient, high-clock-rate, single-core chips.
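The implied trajectory is easy to sketch. Assuming (purely for illustration) a 2-core chip as the starting point and a clean doubling every two-year generation:

```python
# Sketch of the core-count roadmap described above: cores double each
# two-year technology generation. The 2-core starting point and the
# perfectly regular doubling are illustrative assumptions, not a forecast.
cores = 2
for year in range(2006, 2017, 2):
    print(f"{year}: ~{cores} cores")
    cores *= 2
```

Five generations take a chip from 2 cores to 64, which is exactly why software that cannot exploit parallelism stops getting faster.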

Many startups tried commercializing multiple core hardware over the years. They all failed, as programmers accustomed to continuous improvements in sequential performance saw little need to explore parallelism. Convex, Encore, Floating Point Systems, INMOS, Kendall Square Research, MasPar, nCUBE, Sequent, and Thinking Machines are just the best-known members of the Dead Parallel Computer Society, whose ranks are legion. Given this sad history, there is plenty of reason for pessimism about the future of multicore. Quoting computing pioneer and Stanford President John Hennessy:

“…when we start talking about parallelism and ease of use of truly parallel computers, we’re talking about a problem that’s as hard as any that computer science has faced. … I would be panicked if I were in industry.”

Jeopardy for the IT industry means opportunity for the research community. If researchers meet the parallel challenge, the future of IT is rosy. If they don’t, it’s not. Failure could jeopardize both the IT field and the portions of the economy that depend upon rapidly improving information technology. It is also an opportunity for leadership in IT to move from the US to wherever in the world someone invents the solution that makes it easy to write efficient parallel software.

Given this current crisis, it’s ironic that since 2001 DARPA has chosen to decrease funding of academic computer systems research. Knowing what we know today, if we could go back in time we would have launched a Manhattan Project to bring together the best minds in applications, software architecture, programming languages and compilers, libraries, testing and correctness, operating systems, hardware architecture, and chip design to tackle this parallel challenge.

Since we don’t have time travel, there is an even greater sense of urgency to get such an effort underway. Indeed, industry has recently stepped in to fund efforts at three universities–Berkeley, Illinois, and Stanford–but it’s unrealistic to expect industry to fund many more. It’s also clear, given the urgency and importance to the industry and the nation, that we can’t depend on just three academic projects to preserve the future of the US IT industry. We need the US Government to return to its historic role and bring many more minds to bear on these important problems. To make real progress, we would need a long-term program funded at several hundred million dollars per year.

The consequences of not funding aren’t a drop in Nobel prizes or research breakthroughs; they are a decline in the US-led IT industry, a slowdown in portions of the US economy, and possibly the ceding of IT leadership to another part of the world, where governments understand the potential economic impact of funding academic IT research on parallelism.

David Patterson