Title
Low Precision RNNs: Quantizing RNNs Without Losing Accuracy
Abstract
Similar to convolutional neural networks, recurrent neural networks (RNNs) typically suffer from over-parameterization. Quantizing weights and activations to low bit-widths improves runtime efficiency on hardware, yet it often comes at the cost of reduced accuracy. This paper proposes a quantization approach that increases model size as the bit-width is reduced. This allows networks to match their baseline accuracy while retaining the benefits of reduced precision and an overall reduction in model size.
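The trade-off summarized above (lower bit-widths combined with a larger model so that total storage still shrinks) can be illustrated with a minimal NumPy sketch. The quantizer below is a generic symmetric uniform quantizer, not necessarily the paper's exact scheme, and the layer widths and bit-widths (512 vs. 1024 hidden units, 32-bit vs. 4-bit) are hypothetical numbers chosen only for illustration.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric per-tensor uniform quantization of a weight tensor to `bits` bits.
    Generic illustration of low-precision weights, not the paper's exact method."""
    levels = 2 ** (bits - 1) - 1                      # e.g. 7 positive levels for 4-bit
    max_abs = np.max(np.abs(w))
    scale = max_abs / levels if max_abs > 0 else 1.0
    return np.round(w / scale) * scale                # snap values to a low-precision grid

# Quantize a small random weight matrix to 4 bits.
w = np.random.randn(4, 4).astype(np.float32)
print(quantize_uniform(w, bits=4))

# Hypothetical storage comparison for one square recurrent weight matrix:
# doubling the hidden size while dropping from 32-bit to 4-bit weights
# still halves the storage relative to the full-precision baseline.
baseline_hidden, widened_hidden = 512, 1024           # assumed widths (illustrative only)
baseline_bits, quantized_bits = 32, 4                 # assumed bit-widths (illustrative only)
baseline_size = (baseline_hidden ** 2) * baseline_bits
widened_size = (widened_hidden ** 2) * quantized_bits
print(widened_size / baseline_size)                   # 0.5 -> smaller despite 2x the units
```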
Year
2017
Venue
arXiv: Learning
Field
Convolution, Recurrent neural network, Size reduction, Artificial intelligence, Quantization (signal processing), Artificial neural network, Machine learning, Mathematics
DocType
Journal
Volume
abs/1710.07706
Citations
1
PageRank
0.36
References
5
Authors
3
Name            Order  Citations  PageRank
Supriya Kapur   1      1          0.36
Asit K. Mishra  2      1216       46.21
Debbie Marr     3      175        12.39