Title
Stochastic Optimization with Bandit Sampling.
Abstract
Many stochastic optimization algorithms estimate the gradient of the cost function on the fly by sampling datapoints uniformly at random from a training set. However, this estimator can have a large variance, which slows down the convergence of the algorithms. One way to reduce the variance is to sample the datapoints from a carefully chosen non-uniform distribution. In this work, we propose a novel non-uniform sampling approach that uses the multi-armed bandit framework. Theoretically, we show that our algorithm asymptotically approximates the optimal variance within a factor of 3. Empirically, we show that this datapoint-selection technique significantly reduces the convergence time and variance of several stochastic optimization algorithms such as SGD, SVRG and SAGA. The sampling approach is general and can be combined with any algorithm that relies on an unbiased gradient estimate; we therefore expect it to have broad applicability beyond the specific examples explored in this work.
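To make the abstract's idea concrete, the short Python sketch below illustrates one plausible reading of it (it is an assumption of this note, not the authors' exact algorithm): SGD on a least-squares objective where each datapoint is drawn from a learned non-uniform distribution p, the per-example gradient is reweighted by 1/(n * p_i) so the estimate of the full gradient stays unbiased, and p is adapted with an EXP3-style multiplicative-weights rule driven by the observed gradient norm. The function name bandit_sgd and the parameters eta and mix are hypothetical, introduced only for this sketch.

import numpy as np

def bandit_sgd(X, y, steps=2000, lr=0.05, eta=0.01, mix=0.2, seed=0):
    # Minimal sketch of bandit-style non-uniform sampling for SGD (assumed
    # interpretation of the abstract, not the paper's exact method).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    log_weights = np.zeros(n)  # one bandit weight per datapoint, kept in log-space

    for _ in range(steps):
        # mix the learned distribution with the uniform one for exploration
        q = np.exp(log_weights - log_weights.max())
        p = (1.0 - mix) * q / q.sum() + mix / n

        i = rng.choice(n, p=p)

        # per-example squared-loss gradient, reweighted so the update is an
        # unbiased estimate of the full gradient: E[g_i/(n p_i)] = (1/n) sum_j g_j
        g_i = (X[i] @ w - y[i]) * X[i]
        w -= lr * g_i / (n * p[i])

        # bandit feedback: datapoints with larger gradient norms are sampled
        # more often; the reward is clipped for numerical stability
        reward = min(np.linalg.norm(g_i) / (n * p[i]), 10.0)
        log_weights[i] += eta * reward

    return w

if __name__ == "__main__":
    # usage on synthetic least-squares data
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.1 * rng.normal(size=200)
    print("recovered:", np.round(bandit_sgd(X, y), 2))
    print("true:     ", np.round(w_true, 2))

The same reweighting-by-1/(n p_i) trick is what lets a non-uniform sampler plug into any method built on unbiased gradient estimates (SGD, SVRG, SAGA), as the abstract notes.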
Year: 2017
Venue: arXiv: Learning
Field: Convergence (routing), Training set, Mathematical optimization, Stochastic optimization, On the fly, Rate of convergence, Sampling (statistics), Artificial intelligence, Mathematics, Machine learning, Gradient estimation, Estimator
DocType:
Volume: abs/1708.02544
Citations: 0
Journal:
PageRank: 0.34
References: 5
Authors: 3
Name              Order  Citations  PageRank
Farnood Salehi    1      4          3.77
L. Elisa Celis    2      65         14.72
Patrick Thiran    3      2712       217.24