Title: Stop Wasting My Gradients: Practical SVRG
Abstract:
We present and analyze several strategies for improving the performance of stochastic variance-reduced gradient (SVRG) methods. We first show that the convergence rate of these methods can be preserved under a decreasing sequence of errors in the control variate, and use this to derive variants of SVRG that use growing-batch strategies to reduce the number of gradient calculations required in the early iterations. We further (i) show how to exploit support vectors to reduce the number of gradient computations in the later iterations, (ii) prove that the commonly-used regularized SVRG iteration is justified and improves the convergence rate, (iii) consider alternate mini-batch selection strategies, and (iv) consider the generalization error of the method.
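To make the growing-batch idea from the abstract concrete, below is a minimal Python sketch of the standard SVRG iteration in which the full-gradient control variate is approximated on a sample that grows across epochs, so early epochs need fewer gradient evaluations. The function name `svrg_growing_batch`, the doubling schedule, the step size, and the inner-loop length are illustrative assumptions, not the paper's exact choices.

```python
# A minimal sketch of SVRG with a growing-batch control variate: the full
# gradient at each snapshot is replaced by a gradient averaged over a
# sample whose size grows across epochs. Schedule and step size are
# illustrative assumptions, not the paper's exact parameters.
import numpy as np

def svrg_growing_batch(grad_i, n, x0, step=0.1, epochs=10, inner=None, rng=None):
    """Minimize (1/n) * sum_i f_i(x), given per-example gradients grad_i(i, x)."""
    rng = np.random.default_rng() if rng is None else rng
    inner = n if inner is None else inner
    x = x0.copy()
    batch = max(1, n // 2 ** (epochs - 1))  # assumed doubling schedule
    for _ in range(epochs):
        snapshot = x.copy()
        # Approximate the full gradient at the snapshot over a growing batch.
        sample = rng.choice(n, size=min(batch, n), replace=False)
        mu = np.mean([grad_i(i, snapshot) for i in sample], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient step.
            g = grad_i(i, x) - grad_i(i, snapshot) + mu
            x = x - step * g
        batch = min(n, 2 * batch)  # grow the batch toward the full data set
    return x
```

For instance, for unregularized logistic regression with features a_i and labels b_i in {-1, +1}, one could pass grad_i(i, x) = -b_i * a_i / (1 + exp(b_i * a_i @ x)); once the batch reaches n, the loop reduces to plain SVRG.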
Year: 2015
Venue: Annual Conference on Neural Information Processing Systems
Field: Mathematical optimization, Control variates, Algorithm, Rate of convergence, Generalization error, Mathematics, Computation
Volume: abs/1511.01942
ISSN: 1049-5258
Citations: 21
PageRank: 0.91
References: 13
Authors: 6

Name                  Order  Citations  PageRank
Reza Harikandeh       1      22         1.95
Mohamed Osama Ahmed   2      22         1.27
Alim Virani           3      21         1.59
Mark Schmidt          4      26         2.35
Jakub Konecný         5      363        19.21
Scott Sallinen        6      41         3.01