Title
High-Accuracy Low-Precision Training
Abstract
Low-precision computation is often used to lower the time and energy cost of machine learning, and recently hardware accelerators have been developed to support it. Still, it has been used primarily for inference, not training. Previous low-precision training algorithms suffered from a fundamental tradeoff: as the number of bits of precision is lowered, quantization noise is added to the model, which limits statistical accuracy. To address this issue, we describe a simple low-precision stochastic gradient descent variant called HALP. HALP converges at the same theoretical rate as full-precision algorithms despite the noise introduced by using low precision throughout execution. The key idea is to use SVRG to reduce gradient variance, and to combine this with a novel technique called bit centering to reduce quantization error. We show that on the CPU, HALP can run up to $4 \times$ faster than full-precision SVRG and can match its convergence trajectory. We implemented HALP in TensorQuant, and show that it exceeds the validation performance of plain low-precision SGD on two deep learning tasks.
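To make the abstract's idea concrete, below is a minimal NumPy sketch of low-precision SVRG with bit centering on a least-squares problem. It is an illustration under stated assumptions, not the paper's TensorQuant implementation: the names `quantize` and `halp_sketch`, the step size, bit width, inner-loop length, and the choice of `||full gradient|| / mu` (with an assumed strong-convexity parameter `mu`) as the offset's dynamic range are all illustrative.

```python
import numpy as np

def quantize(x, scale, bits=8):
    """Nearest-level fixed-point quantization with step `scale` and `bits` bits
    (a stand-in for the low-precision arithmetic HALP targets)."""
    levels = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), -levels, levels) * scale

def halp_sketch(A, b, w0, lr=0.05, epochs=30, bits=8, mu=1.0, seed=0):
    """Sketch of low-precision SVRG with bit centering on 0.5 * ||A w - b||^2 / n.

    Each epoch: compute the full gradient at the anchor in full precision, run
    SVRG inner steps on a low-precision offset z whose dynamic range is
    proportional to ||full gradient|| / mu, then re-center the anchor (w += z).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    w = w0.astype(np.float64).copy()
    for _ in range(epochs):
        g_full = (A.T @ (A @ w - b)) / n                  # full-precision anchor gradient
        scale = (np.linalg.norm(g_full) / mu) / (2 ** (bits - 1) - 1)
        if scale == 0.0:                                  # already at a stationary point
            break
        z = np.zeros_like(w)                              # low-precision offset from the anchor
        for _ in range(2 * n):                            # SVRG inner loop
            i = rng.integers(n)
            gi_new = A[i] * (A[i] @ (w + z) - b[i])       # stochastic gradient at w + z
            gi_old = A[i] * (A[i] @ w - b[i])             # stochastic gradient at the anchor
            v = gi_new - gi_old + g_full                  # variance-reduced gradient
            z = quantize(z - lr * v, scale, bits)         # low-precision update of the offset
        w = w + z                                         # bit centering: move the anchor
    return w

# Tiny usage example (synthetic data): iterates should approach the least-squares solution.
A = np.random.default_rng(1).normal(size=(200, 5))
b = A @ np.arange(1.0, 6.0) + 0.01 * np.random.default_rng(2).normal(size=200)
print(np.round(halp_sketch(A, b, np.zeros(5)), 2))
```

The point of the sketch is the interaction of the two ideas: SVRG shrinks the gradient variance, and because the quantization scale is tied to the norm of the full gradient, the quantization error of the offset also shrinks as the iterates approach the optimum.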
Year: 2018
Venue: CoRR
DocType: Journal
Volume: abs/1803.03383
Citations: 0
PageRank: 0.34
References: 0
Authors: 7
Name               Order  Citations  PageRank
Christopher De Sa  1      38         6.33
Megan Leszczynski  2      2          1.74
Jian Zhang         3      444        15.83
Alana Marzoev      4      0          0.34
Ré Christopher     5      3422       192.34
Kunle Olukotun     6      4532       373.50
Christopher Ré     7      22         4.96