
Accelerating Accelerating Artificial Neural Networks at ISCA 2016

June 27th, 2016 / in CCC, Research News / by Helen Wright

The following is a special contribution to this blog by CCC Executive Council Member Mark D. Hill of the University of Wisconsin-Madison.

Even with the slowing of Moore’s Law and the end of Dennard scaling, computer chips can still get dramatically better performance, without dramatically more power, by using specialized “accelerator” blocks to perform key tasks much faster (> 100x) and/or at lower power. Classic accelerators include floating-point hardware (a separate chip back in the days of the Intel 8087), graphics processing units (GPUs), and field-programmable gate arrays (FPGAs).

The recent explosion in the progress and importance of deep learning makes artificial neural networks a promising target for hardware acceleration. To this end, at least NINE papers at the recent International Symposium on Computer Architecture (ISCA 2016) in Seoul, Korea, targeted neural network acceleration by reducing the cost of computation, storage, and/or communication. Ideas included eliminating zeros (as with sparse matrices), using low-precision or even analog values, exploiting emerging memory technologies, incorporating predication and pruning, minimizing data movement, leveraging 3D die stacking, and developing a new instruction set architecture. To find these papers, please see the Main Program and look for the creatively named sessions: Neural Networks 1, Neural Networks 2, and Neural Networks 3.
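To give a rough feel for two of these ideas, here is a minimal software sketch (not taken from any of the ISCA papers; all names are illustrative) of why skipping zero operands and storing weights at low precision cut computation and storage. Hardware accelerators do this with dedicated logic rather than Python loops, of course.

```python
# Illustrative sketch only: zero-skipping and int8 quantization for a dot product.
import numpy as np

def quantize_int8(w):
    """Map float32 weights to 8-bit integers plus a scale factor (4x less storage)."""
    scale = np.max(np.abs(w)) / 127.0 if np.any(w) else 1.0
    return np.round(w / scale).astype(np.int8), scale

def sparse_dot(x, w_q, scale):
    """Dot product that skips multiply-accumulates when either operand is zero,
    mimicking what a zero-eliminating accelerator does in hardware."""
    acc = 0.0
    macs = 0
    for xi, wi in zip(x, w_q):
        if xi == 0 or wi == 0:
            continue          # no work performed for zero operands
        acc += xi * int(wi)
        macs += 1
    return acc * scale, macs

# Toy example: a mostly-zero activation vector (e.g., after a ReLU layer).
rng = np.random.default_rng(0)
x = rng.random(1000) * (rng.random(1000) > 0.7)   # roughly 70% zeros
w = rng.standard_normal(1000)
w_q, s = quantize_int8(w)
y, macs = sparse_dot(x, w_q, s)
print(f"result = {y:.3f}; multiply-accumulates performed: {macs} of {len(x)}")
```

In this toy run, roughly 70% of the multiply-accumulates are skipped and the weights occupy a quarter of their original storage, which is the flavor of savings the papers pursue in silicon.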

While it is hard to tell which (combination of) ideas will ultimately prevail, this burst of parallel activity will likely accelerate the day when we can all use neural network accelerators, including accelerators in the cloud that we don’t even know we are using.
