Title
Recurrent Neural Networks With Limited Numerical Precision.
Abstract
Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many machine learning tasks, but their demands on resources in terms of memory and computational power are often high. There is therefore great interest in optimizing the computations performed with these models, especially when considering the development of specialized low-power hardware for deep networks. One way of reducing the computational needs is to limit the numerical precision of the network weights and biases. This has led to various proposed rounding methods, which have so far been applied only to Convolutional Neural Networks and Fully-Connected Networks. This paper addresses the question of how best to reduce weight precision during training in the case of RNNs. We present results from the use of different stochastic and deterministic reduced-precision training methods applied to three major RNN types, which are then tested on several datasets. The results show that the weight binarization methods do not work with RNNs. However, the stochastic and deterministic ternarization and pow2-ternarization methods gave rise to low-precision RNNs that produce similar or even higher accuracy on certain datasets, thereby providing a path towards training more efficient implementations of RNNs in specialized hardware.
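The rounding schemes named above are only listed here, not defined. As an illustrative sketch under common definitions (not necessarily the paper's exact formulation), deterministic ternarization thresholds clipped weights into {-1, 0, +1}, while stochastic ternarization rounds each weight away from zero with probability proportional to its magnitude, so the quantization is unbiased in expectation. The clipping range and the 0.5 threshold below are assumptions for illustration.

    import numpy as np

    def deterministic_ternarize(w, threshold=0.5):
        # Clip weights to [-1, 1], then map each entry to {-1, 0, +1}
        # depending on whether its magnitude exceeds the threshold.
        scaled = np.clip(w, -1.0, 1.0)
        t = np.zeros_like(scaled)
        t[scaled > threshold] = 1.0
        t[scaled < -threshold] = -1.0
        return t

    def stochastic_ternarize(w, rng=None):
        # Clip weights to [-1, 1], then round each entry away from zero
        # (to sign(w)) with probability |w|, else to 0; unbiased in expectation.
        rng = np.random.default_rng() if rng is None else rng
        scaled = np.clip(w, -1.0, 1.0)
        prob = np.abs(scaled)
        return np.sign(scaled) * (rng.random(scaled.shape) < prob)

In low-precision training schemes of this kind, the ternarized copy of the weights is typically used in the forward and backward passes, while a full-precision copy is kept for the gradient updates.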
Year
2016
Venue
arXiv: Neural and Evolutionary Computing
DocType
Journal
Volume
abs/1608.06902
Citations
9
PageRank
0.64
References
11
Authors
5
Name            Order  Citations  PageRank
Joachim Ott     1      9          0.64
Zhouhan Lin     2      419        17.51
Ying Zhang      3      47         2.89
Shih-chii Liu   4      1005       103.47
Yoshua Bengio   5      42677      3039.83