Abstract
---
In this paper, we study and analyze the mini-batch version of the StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization with nonconvex losses. We provide a sublinear convergence rate (to stationary points) for general nonconvex functions and a linear convergence rate for gradient-dominated functions, both of which have advantages over other modern stochastic gradient algorithms for nonconvex losses.
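The recursive update behind SARAH is compact enough to sketch. Below is a minimal, illustrative Python version of the mini-batch loop: each outer loop starts from a full gradient, and each inner step corrects the previous estimate with a mini-batch gradient difference. The helper `grad_i`, the step size `eta`, and all other names and defaults here are assumptions for illustration, not notation taken from the paper.

```python
import numpy as np

def sarah_minibatch(grad_i, n, w0, eta=0.01, batch_size=32,
                    inner_steps=100, outer_loops=10, seed=None):
    """Illustrative mini-batch SARAH loop (names are not the paper's).

    grad_i(w, idx): mean gradient over the sample losses indexed by `idx`,
    evaluated at the iterate w; `n` is the number of training samples.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(outer_loops):
        # Each outer loop starts from a full gradient at the snapshot point.
        v = grad_i(w, np.arange(n))
        w_prev, w = w, w - eta * v
        for _ in range(inner_steps):
            idx = rng.choice(n, size=batch_size, replace=False)
            # Recursive estimator: previous estimate plus the mini-batch
            # gradient difference between consecutive iterates.
            v = grad_i(w, idx) - grad_i(w_prev, idx) + v
            w_prev, w = w, w - eta * v
    return w
```

Unlike SVRG, the estimate `v` is built recursively from the previous estimate rather than recomputed against a fixed snapshot gradient, which is what the "recursive" in the name refers to.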
Year | Venue | Field
---|---|---
2017 | arXiv: Machine Learning | Sublinear function, Gradient method, Mathematical optimization, Algorithm, Loss minimization, Stationary point, Rate of convergence, Mathematics, Recursion

DocType | Volume | Citations
---|---|---
Journal | abs/1705.07261 | 6

PageRank | References | Authors
---|---|---
0.46 | 5 | 4
Name | Order | Citations | PageRank |
---|---|---|---
Lam M. Nguyen | 1 | 43 | 8.95 |
Jie Liu | 2 | 61 | 3.25 |
Katya Scheinberg | 3 | 744 | 69.50 |
Martin Takáč | 4 | 752 | 49.49