Title
Accelerating Hessian-Free Optimization For Deep Neural Networks By Implicit Preconditioning And Sampling
Abstract
Hessian-free training has become a popular parallel second-order optimization technique for deep neural network training. This study aims to speed up Hessian-free training, both by decreasing the amount of data used for training and by reducing the number of Krylov subspace solver iterations used for implicit estimation of the Hessian. First, we develop an L-BFGS-based preconditioning scheme that avoids the need to access the Hessian explicitly. Since L-BFGS cannot be regarded as a fixed-point iteration, we further propose the use of flexible Krylov subspace solvers that retain the theoretical convergence guarantees of their conventional counterparts. Second, we propose a new sampling algorithm, which geometrically increases the amount of data used for gradient and Krylov subspace iteration calculations. On a 50-hour English Broadcast News task, these methodologies provide roughly a 1.5x speedup; on a 300-hour Switchboard task, they provide over a 2.3x speedup, with no loss in WER. These results suggest that even greater speedups can be expected as problem scale and complexity grow.
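The geometrically increasing sampling the abstract describes can be sketched as a simple schedule that grows the minibatch used for gradient and Krylov-iteration computations until the full training set is reached. This is a minimal illustrative sketch only; the starting fraction and growth factor below are assumptions, not values reported in the paper.

```python
def sampling_schedule(num_examples, start_fraction=0.01, growth=2.0):
    """Yield the number of training examples to use at each Hessian-free
    iteration, growing geometrically until the full set is reached.

    start_fraction and growth are illustrative defaults, not taken
    from the paper.
    """
    size = max(1, int(num_examples * start_fraction))
    while size < num_examples:
        yield size
        size = min(num_examples, int(size * growth))
    yield num_examples  # final iterations use the full training set


# Example: 100k training examples, doubling the sample each iteration.
sizes = list(sampling_schedule(100_000))
```

With these defaults the schedule starts at 1,000 examples, doubles each iteration, and caps at the full 100,000, so early (cheap) iterations see little data while later iterations use the whole set.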
Year
2013
DOI
10.1109/ASRU.2013.6707747
Venue
2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)
Keywords
neural nets, learning (artificial intelligence), iterative methods, speech recognition
Field
Krylov subspace, Convergence (routing), Broadcasting, Mathematical optimization, Computer science, Hessian matrix, Artificial intelligence, Sampling (statistics), Solver, Artificial neural network, Machine learning, Speedup
DocType
Conference
Citations
2
PageRank
0.42
References
9
Authors
5
Name                   Order  Citations  PageRank
Tara N. Sainath        1      3497       232.43
Lior Horesh            2      22         6.04
B. Kingsbury           3      4175       335.43
Aleksandr Y. Aravkin   4      252        32.68
Bhuvana Ramabhadran    5      1779       153.83