Title
Precision Gating: Improving Neural Network Efficiency with Dynamic Dual-Precision Activations
Abstract
We propose precision gating (PG), an end-to-end trainable dynamic dual-precision quantization technique for deep neural networks. PG computes most features at low precision and only a small proportion of important features at higher precision to preserve accuracy. The approach applies to a variety of DNN architectures and significantly reduces the computational cost of DNN execution with almost no accuracy loss. Our experiments show that PG achieves excellent results on CNNs, including statically compressed, mobile-friendly networks such as ShuffleNet. Compared to state-of-the-art prediction-based quantization schemes, PG achieves the same or higher accuracy with 2.4× less compute on ImageNet. PG also applies to RNNs: compared to 8-bit uniform quantization, it obtains a 1.2% improvement in perplexity per word with a 2.7× reduction in computational cost for an LSTM on the Penn Treebank dataset.
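To make the dual-precision idea concrete, below is a minimal NumPy sketch of computing most features at low precision and recomputing a small "hot" fraction at higher precision. It assumes uniform symmetric quantization and uses the magnitude of the low-precision result as the importance signal; the function names (quantize, precision_gate) and the fixed hot_fraction are illustrative assumptions, not the paper's method, since PG learns its gate end-to-end during training.

```python
# Minimal sketch of dual-precision activation computation.
# Assumptions (not from the paper): uniform symmetric quantization,
# magnitude-based importance, a fixed fraction of "hot" features.
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of x to the given bit width."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return x
    return np.round(x / scale) * scale

def precision_gate(activations, low_bits=4, high_bits=8, hot_fraction=0.1):
    """Compute all features at low precision, then recompute only the
    most important fraction at high precision."""
    low = quantize(activations, low_bits)
    # Importance proxy: magnitude of the low-precision result.
    k = max(1, int(hot_fraction * activations.size))
    threshold = np.partition(np.abs(low).ravel(), -k)[-k]
    mask = np.abs(low) >= threshold
    out = low.copy()
    # In PG the high-order bits are computed first and low-order bits
    # are only fetched for gated features; here we emulate the numerics.
    out[mask] = quantize(activations, high_bits)[mask]
    return out, mask

x = np.random.randn(8, 16).astype(np.float32)
y, hot = precision_gate(x)
print(f"features recomputed at high precision: {hot.mean():.1%}")
```

In this sketch the savings come from the gate: only about hot_fraction of the features ever touch the high-precision path, which is the source of the compute reduction the abstract reports.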
Year
2020
Venue
ICLR
Keywords
deep learning, neural network, dynamic quantization, dual precision, efficient gating
DocType
Conference
Citations
0
PageRank
0.34
References
20
Authors
6
Name            Order  Citations  PageRank
Yichi Zhang     1      0          0.34
Ritchie Zhao    2      134        8.19
Weizhe Hua      3      23         5.58
Nayun Xu        4      0          1.01
G. Edward Suh   5      2721       208.03
Zhiru Zhang     6      1020       71.74