I served as a panelist at the White House National Strategic Computing Initiative (NSCI) Workshop on October 20-21. I took away a number of points about the consensus of the group that I thought worth sharing with the broader community.
1) It is clear that CMOS is coming to an end. That was a striking consensus of the group, on both per-transistor cost and scaling. The semiconductor researchers are counting on extreme ultraviolet lithography (EUV) to come online, although they’ve been struggling with it for 10 years and still don’t have it working, and even once it works, cost will still be an issue. It may never get there.
2) It is clear that there is no replacement for CMOS devices within the next ten years (also a strong consensus of the assembled group). None of the nearer-term devices under consideration appears to have a significantly superior power/speed profile to CMOS, and those candidates that show significant theoretical improvements haven’t yet been demonstrated in the lab.
3) People view quantum computing as a good long-term bet for a disruptor. Both Microsoft and Google are now making major investments in that space. The group had concerns about whether quantum would produce killer apps (I, for one, do not share those concerns; I think a number of them have been identified).
4) No new computational models outside of quantum, neuromorphic, and approximate computing were discussed. Approximate computing was viewed as having potential, but is unlikely to produce enough benefit to pick up the post-CMOS slack. (My own view on the neural space is that it’s incredibly important, but that neuromorphic (“brain inspired”) architectures are premature. The next five years will continue to produce large gains in non-neuromorphic machine learning (e.g. CNNs/DNNs/RNNs/LSTMs). On the neuromorphic side, what we really need are advances in neural theory and an increased understanding of the brain’s underlying computational mechanisms.)
5) There was a general concern among the attendees about how legacy codes can benefit in this “New Normal” of computing, how supercomputers can continue to scale, and how we can continue to grow the talent pool in the HPC community.
6) The scale and explosive growth rate of the cloud market was surprising to many of the attendees. The scale of the cloud market will enable investments in system design that are otherwise unaffordable. This shift may be analogous to how the growth in the PC market allowed the commodity (“killer micro”) CPUs to wipe out most of the supercomputing companies and custom vector machines starting in the early 1990s. The key question here is: will HPC systems have to shift to the cloud to be competitive?
7) There is going to be increasing complexity in systems as people specialize, move to heterogeneous systems, etc., in response to the slowing or end of Moore’s Law. The HPC community is really worried about the challenge of productivity in that environment. The NSCI program should provide significant support for academic research on novel prototype systems and stacks, since the New Normal will require risky and aggressive exploration down many paths. Given the large and growing scale of the cloud providers, our star systems researchers will need significant support to construct alternative systems that have eventual impact.