Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.

Multicore: It’s the Software

October 7th, 2008 / in research horizons / by Peter Lee

In previous posts on this blog, Berkeley’s David Patterson and Intel’s Andrew Chien presented their views on why research advances are needed to overcome the problems posed by multicore processors. In this piece, the third in the series, Microsoft’s Dan Reed gives us his views on some of the potential benefits of progress in this research area.

For over thirty years, we have watched the great cycle of innovation defined by the commodity hardware/software ecosystem — faster processors enable software with new features and capabilities that in turn require faster processors, which beget new software. The great wheel has turned, but it turns no more, as power constraints and device physics now limit the performance achievable with single microprocessors. Multicore chips — those with multiple, lower power processors per chip — are now the norm. Moreover, current multicore chips (those with 4-8 cores/chip) are but the beginning. We can expect hundreds of cores per chip in the future, with diverse functionality (graphics, packet protocol processing, DSP, cryptography and other features).

The software research challenge is clear — developing effective programming abstractions and tools that hide the diversity of multicore chips and features while exploiting their performance for important applications. Hence, we need a vibrant community of researchers exploring diverse approaches to parallel programming — languages, libraries, compilers, tools — and their applicability to multiple application domains.
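
To make the flavor of such abstractions concrete, here is a minimal sketch (not from the post; the names are illustrative) of a parallel map in Python that hides the number of cores behind a library interface: the programmer states what to compute, and the pool decides how to spread it across the machine.

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(x):
    """Stand-in for per-item work (e.g., image or signal processing)."""
    return x * x

def parallel_map(fn, items):
    # The abstraction hides core count and topology: the pool sizes
    # itself to the machine by default, so the same call runs on a
    # 4-core laptop or a future hundred-core chip unchanged.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fn, items))

print(parallel_map(expensive, range(8)))
```

For CPU-bound work in CPython, a `ProcessPoolExecutor` would sidestep the global interpreter lock; the calling interface is identical, which is exactly the point of such abstractions.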

Microsoft researchers are investigating all of these approaches, from coordination languages for robots and distributed systems to mobile phones, desktops and data center clouds. To engage the academic community, Microsoft funds multicore research projects at many sites, and we have partnered with Intel to fund the Universal Parallel Computing Research Centers (UPCRCs) at the University of California at Berkeley and the University of Illinois at Urbana-Champaign.

As Richard Hamming famously noted, “The purpose of computing is insight, not numbers.” In that spirit, I believe our research challenge is to break free from the limitations of the desktop metaphor and exploit the ever greater performance of multicore chips to create new human-computer interaction metaphors that are more natural and intuitive. This will require new approaches to parallel computing education and increased collaboration with researchers in application domains.

As an example, consider one possible future — “spatial computing” — where real-time vision and speech processing, coupled with knowledge bases, distributed sensors and responsive objects, enhance human activities in contextually relevant ways while remaining otherwise unobtrusive. Such an infosphere would adapt to its user’s needs and behavior and move seamlessly across home, work and play.

Multicore brings enormously interesting intellectual challenges and the opportunity to rethink much of how we approach computing.  Let’s embrace the opportunity!

Daniel Reed is Microsoft’s Scalable and Multicore Computing Strategist and a member of the President’s Council of Advisors on Science and Technology (PCAST).

  1. hwright says:

    Ken Strandberg wrote an interesting three-part series about how to approach development of future terascale-on-a-chip processors over at the Intel Multi-core blog. He predicts “a near future where applications can run on hundreds of cores, processing terabytes of data per second using a single processor.”

    His series can be found here:


  2. Louis Savain says:


    The multicore research community is just spinning its wheels. The solution to the parallel programming crisis has been around for decades. The academic research community is blind to it because of its infatuation with Turing machines and multithreading. There is a way to design and program parallel computers that does not involve the use of threads at all. It is a method that programmers have been using to simulate parallelism in such applications as neural networks, cellular automata, simulations, video games and even VHDL. It is not rocket science.
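
    One thread-free style of the kind this comment alludes to can be sketched with a cellular automaton updated by double buffering: every cell “fires” in lockstep against the old state, so no threads or locks are involved. This is a minimal illustration, not necessarily the commenter’s actual method.

    ```python
    def step(cells):
        # Each cell reads only the previous generation and writes to a
        # fresh buffer, so all updates are logically simultaneous:
        # no threads, no locks (elementary CA, Rule 90, as an example).
        n = len(cells)
        return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

    print(step([0, 0, 1, 0, 0]))  # one synchronous "parallel" update -> [0, 1, 0, 1, 0]
    ```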

  3. Dan,

    Because of the von Neumann syndrome, many von-Neumann-only cores cannot be the solution; they are extremely inefficient. Heterogeneous architectures are the way to go. Massive software-to-configware migration is the only possible way to maintain performance growth, replacing the ending free ride on Moore’s law.

  4. Andras says:

    Isn’t the whole thing around multi-core programming a bit of hype?!
    If you avoid shared state, you can fall back to asynchronously communicating many-threaded approaches, which have proved robust in many fields; if you use shared state, you may fix it with locks, transactional memory, etc., but you will face scaling issues.
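
    The shared-state-free alternative mentioned here can be made concrete with message passing over queues; a minimal Python sketch (illustrative only), in which the worker owns its state and communicates only through channels:

    ```python
    import queue
    import threading

    def worker(inbox, outbox):
        # No shared mutable state: all communication flows through queues.
        while True:
            item = inbox.get()
            if item is None:        # sentinel: shut down
                break
            outbox.put(item * 2)    # stand-in for real work

    inbox, outbox = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(inbox, outbox))
    t.start()
    for x in range(4):
        inbox.put(x)
    inbox.put(None)
    t.join()
    results = sorted(outbox.get() for _ in range(4))
    print(results)  # -> [0, 2, 4, 6]
    ```

    Because data is handed off rather than shared, no locks appear in user code; the scaling issues the comment mentions arise only when state genuinely must be shared.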

  5. Chitoor V. Srinivasan says:

    My company, EDSS, Inc., has developed a parallel programming paradigm that is ideally suited to building formally verified parallel software that can run on a diversity of multicore chips. The abstract of a manuscript on this is given below.

    Abstract: This paper introduces a way of implementing parallel programs that can be formally verified. It uses ideas from OO programming, Pi-Calculus, CSP, and Actor systems, together with a new way of organizing communications among parallel processes. The interesting feature is that it allows programs to be developed from an initial abstract statement of interactions among parallel computing units, called cells, and progressively refined to their final implementation. At each stage of refinement, a formal description of the patterns of events that computations generate is derived automatically from the implementation specifications. This formal description is used for two purposes. The first is to prove properties of the implementation, such as correctness, progress, mutual exclusion, and freedom from deadlocks/livelocks, stated in a CTL language. The second is to automatically incorporate into each application a Self-Monitoring System (SMS) that constantly monitors the application in parallel while it is running, throughout its lifetime, with little or no interference with its timings, in order to identify performance errors, pending errors, and patterns of critical behavior, and to generate timely reports.
    The message passing paradigm is called TICC™ and the Parallel Program Development and Execution platform is called TICC™-Ppde. A prototype of TICC™ and TICC™-Ppde without the formal proof methods but with the infrastructure for SMS has been implemented and tested for parallel program development and execution.
    TICC™-Ppde requires, and can efficiently use, large numbers of CPUs. The programming abstractions and tools it provides are ideally suited to developing parallel software that can run on a diversity of multicore chips with integrated TICCNET™, fully exploiting their performance capabilities.
    This paper introduces, through a series of examples, principles of program organization, automatic derivation of models from implementations, and model-based proof generation, and defines the mechanisms that implement the dynamic, model-based SMS. Part II defines the denotational semantics and proof theory.

    Those interested, please contact the author at the above email address.


Trackbacks

  1. Multicore: It’s the Software