Title
Quasi-Newton Methods for Deep Learning: Forget the Past, Just Sample.
Abstract
We present two sampled quasi-Newton methods for deep learning: sampled LBFGS (S-LBFGS) and sampled LSR1 (S-LSR1). Contrary to the classical variants of these methods that sequentially build Hessian or inverse Hessian approximations as the optimization progresses, our proposed methods sample points randomly around the current iterate at every iteration to produce these approximations. As a result, the approximations constructed make use of more reliable (recent and local) information, and do not depend on past iterate information that could be significantly stale. Our proposed algorithms are efficient in terms of accessed data points (epochs) and have enough concurrency to take advantage of parallel/distributed computing environments. We provide convergence guarantees for our proposed methods. Numerical tests on a toy classification problem as well as on popular benchmarking neural network training tasks reveal that the methods outperform their classical variants and are competitive with state-of-the-art first-order methods such as ADAM.
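The core idea in the abstract can be illustrated with a short sketch: instead of reusing curvature pairs from past iterates, sample displacements around the current iterate, form gradient-difference pairs there, and feed them into the standard L-BFGS two-loop recursion. This is only a minimal illustration under assumed choices (gradient-difference pairs, a toy quadratic objective, a fixed sampling radius, and simple Armijo backtracking); the function and parameter names are illustrative and it omits the paper's S-LSR1 variant, Hessian-vector-product pairs, and distributed evaluation.

```python
import numpy as np

def sampled_lbfgs_direction(grad_fn, w, grad_w, m=10, radius=1e-2, rng=None):
    """Return an L-BFGS search direction built from curvature pairs sampled
    around the current iterate w, rather than collected from past iterates."""
    rng = np.random.default_rng() if rng is None else rng
    S, Y = [], []
    for _ in range(m):
        s = radius * rng.standard_normal(w.shape)        # random displacement around w
        y = grad_fn(w + s) - grad_w                       # gradient-difference curvature
        if s @ y > 1e-10 * np.linalg.norm(s) * np.linalg.norm(y):  # keep well-posed pairs
            S.append(s)
            Y.append(y)

    # Standard L-BFGS two-loop recursion over the sampled pairs.
    q = grad_w.copy()
    rhos = [1.0 / (s @ y) for s, y in zip(S, Y)]
    alphas = []
    for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if S:
        gamma = (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])         # initial Hessian scaling
        q *= gamma
    for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q                                             # descent direction -H_k * grad


if __name__ == "__main__":
    # Toy strongly convex quadratic F(w) = 0.5 w'Aw - b'w (illustrative test problem).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 20))
    A = A @ A.T + 20.0 * np.eye(20)
    b = rng.standard_normal(20)
    f = lambda w: 0.5 * w @ A @ w - b @ w
    grad_fn = lambda w: A @ w - b

    w = np.zeros(20)
    for _ in range(25):
        g = grad_fn(w)
        d = sampled_lbfgs_direction(grad_fn, w, g, m=10, rng=rng)
        t = 1.0
        while f(w + t * d) > f(w) + 1e-4 * t * (g @ d):   # simple Armijo backtracking
            t *= 0.5
        w = w + t * d
    print("final gradient norm:", np.linalg.norm(grad_fn(w)))
```

Because every pair is generated at the current iterate, the resulting approximation reflects local curvature only, which is the contrast with classical L-BFGS that the abstract emphasizes.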
Year: 2019
Venue: arXiv: Optimization and Control
DocType: Journal
Volume: abs/1901.09997
Citations: 0
PageRank: 0.34
References: 33
Authors: 3
Name                 Order   Citations   PageRank
Albert S. Berahas    1       21          4.05
Majid Jahani         2       0           1.01
Martin Takác         3       752         49.49