Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


Multi-core and Parallel Programming: Is the Sky Falling?

November 17th, 2008 / in research horizons / by Peter Lee

In previous posts on this blog, Berkeley's David Patterson, Intel's Andrew Chien, and Microsoft's Dan Reed presented their views on why research advances are needed to overcome the problems posed by multicore processors. In this piece, the fourth (and possibly final) entry in the series, Marc Snir from UIUC argues that while we face major challenges, the sky is not falling.

The CCC blog has published several articles on the multi-core challenge, all emphasizing the difficulty of making parallel programming prevalent and, hence, the difficulty of leveraging multi-core systems in mass markets. The challenge is, indeed, significant and requires substantial investment in research and development; but, at UPCRC Illinois, we do not believe that the sky is falling.

Parallel programming, as currently practiced, is hard: programs, especially shared-memory programs, are prone to subtle, hard-to-find synchronization bugs, and parallel performance is elusive. One can reach two possible conclusions from this situation. It is possible that parallel programming is inherently hard, in which case the sky is indeed falling. An alternative view is that parallel programming is not intrinsically much harder than sequential programming; rather, it is hampered by the lack of adequate languages, tools, and architectures. In this alternative view, different practices, supported by the right infrastructure, can make parallel programming prevalent.

This alternative, optimistic view is based on many years of experience with parallel programming. While some concurrent code, e.g., OS code, is hard to write and debug, there are many forms of parallelism that are relatively easy to master: many parallel scientific codes are written by scientists with limited CS education, and the time spent handling parallelism is a small fraction of the time spent developing a large parallel scientific code. Parallelism can be hidden behind an SQL interface and exploited by programmers with little difficulty. Many programmers develop GUIs that are, in effect, parallel programs, using specialized frameworks. Parallelism can be exposed through scripted object systems such as Squeak Etoys in ways that enable young children to write parallel programs. These examples suggest that it is not parallelism per se that is hard to handle; rather, it is the unstructured, unconstrained interaction between concurrent threads that results in code that is hard to understand, from both a correctness and a performance standpoint, and hence hard to debug and tune.
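To make this distinction concrete, here is a minimal C++ sketch of our own (an illustration using C++11 threads, not code from the post). The first version lets two threads interact through shared mutable state in an unconstrained way, which is a data race; the second expresses the same computation as private partial results combined at the end, and is nearly as easy to reason about as sequential code.

```cpp
// Hypothetical illustration: unstructured vs. structured parallelism.
#include <iostream>
#include <thread>

int main() {
    const int N = 1000000;

    // Unstructured (deliberately buggy): two threads increment a shared
    // counter with no synchronization. Their interaction is
    // unconstrained, so this is a data race and the final value is
    // unpredictable.
    long racy = 0;
    std::thread t1([&] { for (int i = 0; i < N; ++i) ++racy; });
    std::thread t2([&] { for (int i = 0; i < N; ++i) ++racy; });
    t1.join(); t2.join();

    // Structured: each thread owns a private partial result; the results
    // are combined only after both threads have finished. No shared
    // mutable state is accessed concurrently, so the outcome is
    // deterministic and easy to reason about.
    long part1 = 0, part2 = 0;
    std::thread t3([&] { for (int i = 0; i < N; ++i) ++part1; });
    std::thread t4([&] { for (int i = 0; i < N; ++i) ++part2; });
    t3.join(); t4.join();

    std::cout << "racy total:       " << racy << " (rarely 2000000)\n";
    std::cout << "structured total: " << part1 + part2 << "\n";  // always 2000000
}
```

Compiled with -pthread, the racy version typically loses updates; that is exactly the kind of nondeterministic bug that is hard to reproduce, debug, and tune.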

The state of the art in parallel programming is where sequential computing was several decades ago. A major reason for this situation is that parallel programming has been an art exercised by a small population of experts, too small to justify major investments in programming environments aimed at making their lives easier. This reason disappears as parallelism becomes available on all platforms. Furthermore, we can make faster progress now because we understand well the principles that make programming easier, such as safety, encapsulation, modularity, and separation of concerns; we also have more experience in developing sophisticated IDEs.
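As a small, hypothetical sketch of what one of these principles, encapsulation, buys (again our example, not the author's): a library routine can own all thread creation and joining, handing each worker a disjoint slice of the data, so that client code reads as if it were sequential.

```cpp
// Hypothetical sketch: encapsulating parallelism behind a library call.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Apply f to every element of v in parallel. The routine owns all thread
// management; each worker gets a disjoint slice of v, so no
// synchronization beyond the final join() is needed.
template <typename T, typename F>
void parallel_for_each(std::vector<T>& v, F f, unsigned workers = 4) {
    std::vector<std::thread> pool;
    const std::size_t chunk = (v.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t lo = w * chunk;
        const std::size_t hi = std::min(v.size(), lo + chunk);
        if (lo >= hi) break;
        pool.emplace_back([&v, f, lo, hi] {
            for (std::size_t i = lo; i < hi; ++i) f(v[i]);
        });
    }
    for (auto& t : pool) t.join();
}

int main() {
    std::vector<double> data(16, 1.0);
    // The caller never touches a thread: parallelism is a library detail.
    parallel_for_each(data, [](double& x) { x = x * x + 1.0; });
    std::cout << data[0] << "\n";  // prints 2
}
```

This is, in spirit, what frameworks such as OpenMP or TBB provide: the discipline lives in the library, not in every caller.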

What will it take to bring these principles of computer science to parallel programming? It will require a broad-based attack across the system stack. As has been said in these blogs, we need research in languages, compilers, runtimes, libraries, tools, hardware, and more. What has not been said explicitly is that none of these areas is likely to produce a silver bullet on its own. The solution that eventually works will be one that brings technologies from all of these areas to bear on each other. However, we do not have the luxury of getting there via incremental and reactive changes over decades. The research truly needs to be interdisciplinary, and the idea of co-design needs to be internalized. Unfortunately, the mainstream systems community has all but abandoned this mode of research in the last several years. Language researchers are locked into mechanisms that can be supported by commodity hardware, and hardware researchers are locked into a mode that requires supporting the lowest common denominator of software. It is imperative that we break out of these shells and get the research community into a mindset that we are truly looking to define a new age of computing, a mindset that nurtures research where a clean system slate is an acceptable starting point.

The sky is not falling, but the ground is shifting rapidly. The multi-core challenge requires a concerted effort by academia and industry to generate new capabilities. We are confident that in the future, as in the past, new capabilities will breed new applications. Multi-core parallelism can be leveraged to develop human-centered consumer products that provide more intelligent and more intuitive interfaces through better graphics and vision, better speech and text processing, and better modeling of the user and the environment.

The task of providing better performance is shifting from the hardware to the software. This is an exciting time for Computer Science.

Marc Snir
4323 Siebel Center, 201 N Goodwin, Urbana, IL 61801
Tel (217) 244 6568
Web http://www.cs.uiuc.edu/homes/snir
