Title
Improving training time of deep neural network with asynchronous averaged stochastic gradient descent
Abstract
Deep neural network (DNN) acoustic models have shown large performance improvements over Gaussian mixture models (GMMs) in recent studies. Stochastic gradient descent (SGD) is the most popular method for training deep neural networks, but minibatch-based SGD is slow: its updates are inherently serial, and it must scan the whole training set for many passes before reaching the asymptotic region, which makes it difficult to scale to large datasets. Training time can be reduced in two ways: by reducing the number of training epochs and by distributing the training. Several distributed training algorithms, such as L-BFGS, Hessian-free optimization, and asynchronous SGD, have been shown to reduce training time significantly. To reduce training time further, we explore a training algorithm with fast convergence and combine it with distributed training. Averaged stochastic gradient descent (ASGD) has proved simple and effective for one-pass online learning. This paper investigates the asynchronous ASGD algorithm for deep neural network training. We evaluated asynchronous ASGD on a Mandarin Chinese recorded-speech recognition task using deep neural networks. Experimental results show that one-pass asynchronous ASGD performs very close to multi-pass asynchronous SGD, while reducing training time by a factor of 6.3.
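For context, averaged SGD maintains a running (Polyak-Ruppert) average of the parameter iterates alongside ordinary SGD updates and returns the average rather than the last iterate. Below is a minimal sketch in Python; the toy objective, learning-rate schedule, and averaging start point `t0` are illustrative assumptions, not the paper's exact configuration (which combines ASGD with an asynchronous parameter server).

```python
import numpy as np

def asgd(grad, w0, lr0=0.1, t0=100, n_steps=1000):
    """Averaged SGD (Polyak-Ruppert): run plain SGD updates and
    keep an incremental mean of the iterates from step t0 onward."""
    w = w0.copy()
    w_bar = w0.copy()
    n_avg = 0
    for t in range(1, n_steps + 1):
        eta = lr0 / (1 + 0.01 * t) ** 0.75  # decaying step size (illustrative)
        w -= eta * grad(w, t)               # ordinary SGD step
        if t >= t0:                         # start averaging late in training
            n_avg += 1
            w_bar += (w - w_bar) / n_avg    # incremental mean of iterates
    return w_bar                            # averaged parameters, not the last iterate

# Usage on a hypothetical noisy quadratic with minimum at w = [1, -2]:
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0])
noisy_grad = lambda w, t: (w - target) + 0.1 * rng.standard_normal(2)
print(asgd(noisy_grad, w0=np.zeros(2)))
```

The averaged iterate smooths out gradient noise, which is what allows ASGD to approach the asymptotic region in roughly one pass where plain SGD needs several.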
Year
2014
DOI
10.1109/ISCSLP.2014.6936596
Venue
ISCSLP
Keywords
one pass learning, deep neural network, GMM, deep neural network acoustic models, speech recognition, learning (artificial intelligence), asynchronous averaged stochastic gradient descent, L-BFGS algorithm, one pass online learning, SGD method, Mandarin Chinese recorded speech recognition task, network training time, minibatch based SGD, Hessian-free optimization algorithm, gradient methods, asynchronous averaged SGD, asynchronous SGD algorithm, neural nets, Gaussian mixture model
Field
Training set, Convergence, Asynchronous communication, Stochastic gradient descent, Computer science, Speech recognition, Artificial intelligence, Artificial neural network, Mixture model, Deep neural networks, Machine learning
DocType
Conference
Citations
1
PageRank
0.35
References
5
Authors
2
Name | Order | Citations | PageRank
Zhao You | 1 | 67 | 9.39
Bo Xu | 2 | 241 | 36.59