Title
OptQuant: Distributed training of neural networks with optimized quantization mechanisms.
Abstract
It has become common practice to speed up the training of deep neural networks by using multiple computational nodes. However, achieving the desired speed-up is non-trivial due to the potentially large communication overhead of distributed training. To reduce the communication cost, several unbiased random quantization mechanisms have been proposed, in which the local workers, i.e., computational nodes, quantize their local gradients before communicating with other workers. Most previous quantization mechanisms are static, i.e., the gradients are quantized in the same way throughout the training process. However, for different neural network models, the distributions of gradients can differ greatly even after normalization. To better minimize the quantization loss, we design parameterized unbiased quantization mechanisms and dynamically optimize the quantization mechanism during training, using the aggregated information of the gradients. We call the distributed deep learning algorithms with our new quantization method (unbiased) OptQuant algorithms. Theoretically, we show that the unbiased OptQuant algorithms converge faster than static unbiased quantization. In addition, trading off bias against variance in the quantization makes the algorithm converge faster still. Motivated by this theoretical result, we further design parameterized biased quantization mechanisms and the corresponding biased OptQuant algorithms. We evaluate our algorithms on different deep neural networks with benchmark datasets. Experimental results indicate that the OptQuant algorithms train the neural network models faster than previous quantization algorithms and much faster than the full-precision (float) version.
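The static unbiased quantization that the abstract contrasts against can be illustrated with a standard stochastic-rounding scheme in the QSGD style. This is a generic sketch for illustration only, not the paper's OptQuant mechanism; the function name and parameters are hypothetical:

```python
import numpy as np

def quantize_unbiased(v, levels=4, rng=None):
    """Stochastically quantize vector v onto `levels` uniform levels per sign.

    The rounding probabilities are chosen so that E[output] == v (unbiased),
    at the cost of added variance. Generic QSGD-style sketch, not OptQuant.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    scaled = np.abs(v) / norm * levels        # magnitudes mapped into [0, levels]
    floor = np.floor(scaled)
    prob_up = scaled - floor                  # round up w.p. fractional part
    q = floor + (rng.random(v.shape) < prob_up)
    return np.sign(v) * q * norm / levels

# A worker would transmit only the norm, the signs, and the small integer
# levels q, instead of full-precision gradient entries.
```

Because each entry rounds up with probability equal to its fractional part, the expected value of the quantized vector equals the input, which is the unbiasedness property the static schemes (and the unbiased OptQuant variant) rely on.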
Year
2019
DOI
10.1016/j.neucom.2019.02.049
Venue
Neurocomputing
Keywords
Distributed machine learning, Deep learning, Stochastic gradient descent, Gradient quantization
Field
Parameterized complexity, Normalization (statistics), Pattern recognition, Algorithm, Quantization (physics), Artificial intelligence, Deep learning, Artificial neural network, Quantization (signal processing), Deep neural networks, Mathematics, Speedup
DocType
Journal
Volume
340
ISSN
0925-2312
Citations
0
PageRank
0.34
References
0
Authors
5
Name           Order   Citations   PageRank
Li He          1       0           0.68
Shuxin Zheng   2       4           3.10
Wei Chen       3       23          1.84
Zhi-Ming Ma    4       227         18.26
Tie-yan Liu    5       4662        256.32