Title
Computation Reuse in DNNs by Exploiting Input Similarity
Abstract
In recent years, Deep Neural Networks (DNNs) have achieved tremendous success in diverse problems such as classification and decision making. Efficient support for DNNs on CPUs, GPUs and accelerators has become a prolific area of research, resulting in a plethora of techniques for energy-efficient DNN inference. However, previous proposals focus on a single execution of a DNN. Popular applications, such as speech recognition or video classification, require multiple back-to-back executions of a DNN to process a sequence of inputs (e.g., audio frames, images). In this paper, we show that consecutive inputs exhibit a high degree of similarity, which causes the inputs and outputs of each layer to be extremely similar across successive speech frames or video images. Based on this observation, we propose a technique that reuses results from the previous execution instead of recomputing the entire DNN. The computations associated with inputs that show negligible change can be skipped with minor impact on accuracy, eliminating a large fraction of computations and memory accesses. We implement our reuse-based inference scheme on top of a state-of-the-art DNN accelerator. Results show that, on average, more than 60% of the inputs of every neural network layer tested exhibit negligible change with respect to the previous execution. Avoiding the memory accesses and computations for these inputs yields 63% energy savings on average.
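As a rough illustration of the idea described in the abstract, the NumPy sketch below applies similarity-based reuse to a single fully-connected layer: inputs whose change since the previous execution falls below a threshold are skipped, and the output is updated incrementally from the deltas of the remaining inputs. The function name, threshold value, and state handling are illustrative assumptions, not the paper's exact mechanism, which quantizes inputs and operates inside a hardware accelerator.

import numpy as np

def fc_with_reuse(W, b, x, state=None, threshold=0.04):
    """Fully-connected layer with similarity-based computation reuse.

    Sketch under assumed names/values: `threshold` is a made-up
    quantization step, and `state` carries (previous inputs used,
    previous outputs) across consecutive executions.
    """
    if state is None:
        # First execution: nothing to reuse, evaluate the layer fully.
        y = W @ x + b
        return y, (x.copy(), y)
    x_prev, y_prev = state
    delta = x - x_prev
    # Mask of inputs whose change is non-negligible since last time.
    changed = np.abs(delta) > threshold
    # Linearity allows an exact incremental update restricted to the
    # changed inputs; unchanged inputs contribute no work at all.
    y = y_prev + W[:, changed] @ delta[changed]
    x_kept = x_prev.copy()
    x_kept[changed] = x[changed]  # remember the values actually used
    return y, (x_kept, y)

# Per-frame usage: carry `state` across consecutive, similar inputs.
# state = None
# for frame in frames:
#     y, state = fc_with_reuse(W, b, frame, state)

Because the layer is linear, the delta update is exact for the recomputed inputs; the only approximation comes from the skipped inputs, whose error is bounded by the threshold times the magnitude of the corresponding weights.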
Year
2018
DOI
10.1109/ISCA.2018.00016
Venue
ISCA
Keywords
DNN, Computation Reuse, Input Similarity, Hardware Accelerator
Field
Reuse, Efficient energy use, Computer science, Inference, Parallel computing, Network architecture, Hardware acceleration, Artificial neural network, Memory architecture, Computation
DocType
Conference
ISSN
1063-6897
ISBN
978-1-5386-5984-7
Citations
8
PageRank
0.53
References
28
Authors
3
Name                Order  Citations  PageRank
Marc Riera          1      22         2.54
Jose-Maria Arnau    2      92         9.15
Antonio González    3      3178       229.66