Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


Recent ISAT/DARPA Workshop Targeted Approximate Computing

June 23rd, 2014 / in big science, CCC, policy, Research News / by Ann Drobnis

The following is a special contribution to this blog by CCC Executive Council Member Mark Hill and workshop organizers Luis Ceze, Associate Professor in the Department of Computer Science and Engineering at the University of Washington, and James Larus, Full Professor and Head of the School of Computer and Communication Sciences at the Ecole Polytechnique Federale de Lausanne.

Luis Ceze and Jim Larus organized a DARPA ISAT workshop on Approximate Computing in February 2014. The goal was to discuss how to obtain 10-100x improvements in performance and MIPS/watt from future hardware by carefully trading off the accuracy of a computation for these other goals. The focus was not on the underlying technology shifts, but rather on the likely radical changes required in hardware, software, and basic computing system properties to pervasively embrace accuracy trade-offs.

Below we provide more detailed motivation for approximate computing; the publicly released slides are available here.

Given the end of Moore’s Law performance improvements and the imminent end of Dennard scaling, it is imperative to find new ways to improve the performance and energy efficiency of computer systems, so as to permit larger and more complex problems to be tackled within constrained power envelopes, package sizes, and budgets. One promising approach is approximate computing, which relaxes the traditional digital orientation of precisely stated and verified algorithms reproducibly and correctly executed on hardware, in favor of approximate algorithms that produce “sufficiently” correct answers. The sufficiency criterion can be probabilistic, requiring that results are usually correct, or it can be a more complex correctness criterion, such as requiring that the most “significant” bits of an answer are correct.
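
To make the second criterion concrete, here is a toy check (our own illustration, not something from the workshop) that an approximate answer agrees with the exact one in its most significant bits; the function name and bit widths are arbitrary:

    def msb_correct(exact: int, approx: int, keep_bits: int = 8, width: int = 16) -> bool:
        """Return True if the top `keep_bits` bits of two `width`-bit unsigned
        values agree, i.e. the answer is correct in its most significant bits
        even if the low-order bits differ."""
        shift = width - keep_bits
        return (exact >> shift) == (approx >> shift)

    print(msb_correct(0x1234, 0x1237))  # True: only low-order bits differ
    print(msb_correct(0x1234, 0x5234))  # False: a high-order bit is wrong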

Approximation introduces another degree of freedom that can be used to improve computer system performance and power efficiency. For example, at one end of the spectrum of possible approximations, one can imagine computers whose circuit implementations employ aggressive voltage and timing optimizations that might introduce occasional non-deterministic errors. At the other end of the spectrum, one can use analog computing techniques in select parts of the computation. One can also imagine entirely new ways of “executing” programs that are inherently approximate; for example, what if we used neural networks to carry out “general” computations like browsing the web, running simulations, or doing search, sorting, and compression of data? Approximation opportunities go beyond computation: we can also imagine ways of storing data approximately that lead to potential retrieval errors but are much denser, faster, and more energy efficient. Relaxing data communication is another possibility, since almost all forms of communication (on-chip, off-chip, wireless, etc.) use resources to guarantee data integrity, which is often unnecessary from the application’s point of view.
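
As a simple illustration of the accuracy-for-work trade-off (a sketch of our own, in the spirit of sampling or loop perforation rather than anything proposed at the workshop), the following Python snippet estimates a mean while visiting only a small fraction of the data:

    import random

    def exact_mean(values):
        # Baseline: visit every element (full cost, exact answer).
        return sum(values) / len(values)

    def approximate_mean(values, sample_rate=0.01, seed=0):
        # Visit only a fraction of the elements, trading accuracy for work.
        rng = random.Random(seed)
        n = max(1, int(len(values) * sample_rate))
        sample = rng.sample(values, n)
        return sum(sample) / n

    data = [i * 0.001 for i in range(100_000)]
    exact = exact_mean(data)
    approx = approximate_mean(data)   # roughly 100x fewer additions
    print(f"exact={exact:.4f} approx={approx:.4f} "
          f"relative error={abs(approx - exact) / exact:.2%}")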

Obviously approximation is not a new idea; it has been used in many areas such as lossy compression and numeric computation. However, these applications of the idea were implemented in specific algorithms, which ran as part of a larger system on a conventional processor. Much of the benefit of approximation may accrue from taking a broader systems perspective, for example by relaxing storage requirements for “approximate data”. But there has been little contemplation of what an approximate computer system would look like. What happens to the rest of the system when the processor evolves to support approximate computation? What is a programming model for approximate computation? What will programming languages and tools that directly support approximate computation look like? How do we prove approximate programs “correct”? Is there a composability model for approximate computing? How do we debug such programs? What will the system stack that supports approximate computing look like? And how do we handle backward compatibility?
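
One way to picture the programming-model question is through source-level annotations that mark which data may tolerate error, so that a compiler or runtime could map that data to cheaper storage and arithmetic. The sketch below is purely hypothetical (the Approx wrapper and its precision-dropping behavior are invented for illustration), loosely in the spirit of research type systems for approximation:

    from dataclasses import dataclass

    @dataclass
    class Approx:
        # Hypothetical marker: this value may tolerate error.
        value: float

    def exact_dot(xs, ys):
        # Must stay exact, e.g. an index computation or a financial total.
        return sum(x * y for x, y in zip(xs, ys))

    def approx_dot(xs, ys):
        # A system could lower this to reduced precision or to aggressively
        # voltage-scaled hardware; here we merely simulate that by rounding
        # each product to three significant digits.
        total = 0.0
        for x, y in zip(xs, ys):
            total += float(f"{x.value * y.value:.3g}")
        return Approx(total)

    pixels = [Approx(v / 255) for v in range(256)]   # error-tolerant image data
    weights = [Approx(0.5)] * 256
    print(approx_dot(pixels, weights).value)         # a "good enough" answer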
