Title
An accelerated variance reducing stochastic method with Douglas-Rachford splitting
Abstract
We consider the problem of minimizing a regularized empirical risk function, represented as the average of a large number of convex loss functions plus a possibly non-smooth convex regularization term. In this paper, we propose a fast variance reducing (VR) stochastic method called Prox2-SAGA. Unlike traditional VR stochastic methods, Prox2-SAGA replaces the stochastic gradient of the loss function with the corresponding gradient mapping, and it also computes the gradient mapping of the regularization term. These two gradient mappings constitute a Douglas-Rachford splitting step. For strongly convex and smooth loss functions, we prove that Prox2-SAGA achieves a linear convergence rate comparable to other accelerated VR stochastic methods, while being more practical since only the stepsize has to be tuned. When each loss function is smooth but non-strongly convex, we prove a convergence rate of $\mathcal{O}(1/k)$ for Prox2-SAGA, where k is the number of iterations. Moreover, experiments show that Prox2-SAGA remains effective for non-smooth loss functions, and that for strongly convex and smooth loss functions it is markedly faster when the loss functions are ill-conditioned.
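Since the abstract describes the core update as a Douglas-Rachford splitting step built from two gradient mappings (one for a sampled loss, one for the regularizer), the following is a minimal sketch of a generic Douglas-Rachford iteration on a single least-squares loss with an l1 regularizer, assuming standard closed-form proximal operators. It is not the authors' exact Prox2-SAGA update (which additionally uses SAGA-style variance reduction not detailed in this record); all function names and the quadratic example are illustrative assumptions.

```python
import numpy as np

def prox_l2_loss(z, a, b, step):
    # Proximal operator of f_i(x) = 0.5 * (a^T x - b)^2, a single
    # least-squares loss, in closed form (rank-one update).
    denom = 1.0 + step * (a @ a)
    return z - step * a * ((a @ z - b) / denom)

def prox_l1(z, step, lam):
    # Proximal operator of the non-smooth regularizer r(x) = lam * ||x||_1
    # (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

def douglas_rachford_step(z, a, b, step, lam):
    # One Douglas-Rachford iteration on min_x f_i(x) + r(x):
    #   x = prox_{step f_i}(z); y = prox_{step r}(2x - z); z <- z + y - x.
    x = prox_l2_loss(z, a, b, step)
    y = prox_l1(2.0 * x - z, step, lam)
    return z + (y - x), x

# Illustrative usage on a random one-sample problem.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(5), 1.0
z = np.zeros(5)
for _ in range(50):
    z, x = douglas_rachford_step(z, a, b, step=0.5, lam=0.1)
```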
Year
2019
DOI
10.1007/s10994-019-05785-3
Venue
Machine Learning
Keywords
Variance reduction (VR), Acceleration, Douglas-Rachford splitting, Proximal operator, Gradient mapping
Field
Applied mathematics, Pattern recognition, Regular polygon, Convex function, Regularization (mathematics), Artificial intelligence, Rate of convergence, Acceleration, Risk function, Mathematics
DocType
Journal
Volume
108
Issue
5
ISSN
0885-6125
Citations
0
PageRank
0.34
References
0
Authors
4
Name | Order | Citations | PageRank
Jingchang Liu | 1 | 0 | 0.68
Linli Xu | 2 | 790 | 42.51
Shuheng Shen | 3 | 5 | 1.48
Qing Ling | 4 | 968 | 60.48