Title
Stochastic Variance Reduction Methods for Policy Evaluation
Abstract
Policy evaluation is a crucial step in many reinforcement-learning procedures: it estimates a value function that predicts the long-term value of states under a given policy. In this paper, we focus on policy evaluation with linear function approximation over a fixed dataset. We first transform the empirical policy evaluation problem into a (quadratic) convex-concave saddle-point problem, and then present a primal-dual batch gradient method, as well as two stochastic variance reduction methods, for solving it. These algorithms scale linearly in both sample size and feature dimension. Moreover, they achieve linear convergence even when the saddle-point problem is only strongly concave in the dual variables, with no strong convexity in the primal variables. Numerical experiments on benchmark problems demonstrate the effectiveness of our methods.
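To make the abstract's construction concrete, below is a minimal sketch of an SVRG-style primal-dual update for the empirical saddle-point problem min_theta max_w w^T(b - A theta) - 0.5 w^T C w, where A, b, C are the usual empirical moment matrices of linear-approximation policy evaluation (A = mean of phi_t (phi_t - gamma phi'_t)^T, b = mean of r_t phi_t, C = mean of phi_t phi_t^T). This is not the authors' code; the function name, parameter names, and step sizes are illustrative assumptions.

```python
import numpy as np

def pd_svrg(phi, phi_next, r, gamma=0.95, sigma_theta=0.01, sigma_w=0.01,
            epochs=30, seed=0):
    """Sketch: SVRG on the saddle point
        min_theta max_w  w^T (b - A theta) - 0.5 * w^T C w
    for policy evaluation with linear features (names are illustrative)."""
    rng = np.random.default_rng(seed)
    n, d = phi.shape
    a = phi - gamma * phi_next        # per-sample a_t = phi_t - gamma * phi'_t
    A = phi.T @ a / n                 # empirical A  (d x d)
    b = phi.T @ r / n                 # empirical b  (d,)
    C = phi.T @ phi / n               # empirical C  (d x d)
    theta, w = np.zeros(d), np.zeros(d)
    for _ in range(epochs):
        theta_s, w_s = theta.copy(), w.copy()     # snapshot point
        mu_theta = -A.T @ w_s                     # full primal gradient at snapshot
        mu_w = b - A @ theta_s - C @ w_s          # full dual gradient at snapshot
        for _ in range(n):                        # one stochastic pass per epoch
            t = rng.integers(n)
            # variance-reduced estimates:
            #   grad_t(current) - grad_t(snapshot) + full_grad(snapshot)
            g_theta = -a[t] * (phi[t] @ (w - w_s)) + mu_theta
            g_w = -phi[t] * (a[t] @ (theta - theta_s)
                             + phi[t] @ (w - w_s)) + mu_w
            theta -= sigma_theta * g_theta        # primal descent step
            w += sigma_w * g_w                    # dual ascent step
    return theta
```

Each inner update touches one sample, so an epoch costs O(n d), linear in sample size and feature dimension as the abstract claims. The linear-convergence guarantee corresponds to C (the feature covariance) being positive definite, which gives strong concavity in w even though the problem is not strongly convex in theta.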
Year
2017
Venue
ICML
DocType
Conference
Volume
abs/1702.07944
Citations
9
PageRank
0.44
References
17
Authors
5
Name | Order | Citations | PageRank
Simon Du | 1 | 210 | 29.79
Jianshu Chen | 2 | 883 | 52.94
Lihong Li | 3 | 2390 | 128.53
Xiao, Lin | 4 | 918 | 53.00
Dengyong Zhou | 5 | 1709 | 65.63