Title
A Variance Reduced Stochastic Newton Method.
Abstract
Quasi-Newton methods are widely used in practice for convex loss minimization problems. They exhibit good empirical performance on a wide variety of tasks and enjoy super-linear convergence to the optimal solution. For large-scale learning problems, stochastic Quasi-Newton methods have recently been proposed. However, these typically achieve only sub-linear convergence rates and have not been shown to consistently perform well in practice, since noisy Hessian approximations can exacerbate the effect of high-variance stochastic gradient estimates. In this work we propose Vite, a novel stochastic Quasi-Newton algorithm that uses an existing first-order technique to reduce this variance. Without exploiting the specific form of the approximate Hessian, we show that Vite reaches the optimum at a geometric rate with a constant step size for smooth, strongly convex functions. Empirically, we demonstrate improvements over existing stochastic Quasi-Newton and variance-reduced stochastic gradient methods.
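The abstract only names the ingredients, a stochastic Quasi-Newton update combined with an existing first-order variance-reduction technique and a constant step size, so the Python sketch below shows one plausible way those pieces fit together. It assumes an SVRG-style control variate for the gradient and a generic positive-definite curvature approximation; the names (vite_like_update, grad_i, full_grad, hess_approx) are illustrative and not taken from the paper.

import numpy as np

def vite_like_update(grad_i, full_grad, hess_approx, w0, n,
                     step=0.1, epochs=10, inner_steps=None, seed=0):
    """Sketch of a variance-reduced stochastic Newton-type update (illustrative,
    not the paper's code).

    grad_i(w, i)   : gradient of the i-th component function at w
    full_grad(w)   : full gradient (1/n) * sum_i grad_i(w, i)
    hess_approx(w) : positive-definite approximation of the Hessian at w
    """
    rng = np.random.default_rng(seed)
    m = n if inner_steps is None else inner_steps
    w = np.array(w0, dtype=float)
    for _ in range(epochs):
        w_snap = w.copy()          # snapshot point, as in SVRG
        mu = full_grad(w_snap)     # full gradient at the snapshot
        B = hess_approx(w_snap)    # curvature estimate, held fixed this epoch
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced gradient estimate (SVRG-style control variate)
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            # Newton-type step: precondition the reduced-variance gradient
            # with the approximate Hessian, using a constant step size
            w = w - step * np.linalg.solve(B, g)
    return w

Holding the step size constant across epochs mirrors the geometric-rate claim for smooth, strongly convex objectives; a practical variant would replace the dense solve against B with a cheaper Quasi-Newton (e.g. L-BFGS-style) inverse-Hessian product.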
Year
2015
Venue
CoRR
Field
Convergence (routing), Mathematical optimization, Stochastic optimization, Hessian matrix, Regular polygon, Convex function, Loss minimization, Mathematics, Exponential growth, Newton's method
DocType
Journal
Volume
abs/1503.08316
Citations
6
PageRank
0.43
References
10
Authors
3
Name               Order   Citations, PageRank
Aurelien Lucchi    1       241989.45
Brian McWilliams   2       82.81
Thomas Hofmann     3       308.97