Title
Sample Complexity Bounds For Two Timescale Value-Based Reinforcement Learning Algorithms
Abstract
Two timescale stochastic approximation (SA) has been widely used in value-based reinforcement learning algorithms. In the policy evaluation setting, it can model the linear and nonlinear temporal difference learning with gradient correction (TDC) algorithms as linear SA and nonlinear SA, respectively. In the policy optimization setting, two timescale nonlinear SA can also model the greedy gradient-Q (Greedy-GQ) algorithm. Previous studies analyzed the non-asymptotic convergence of linear TDC and Greedy-GQ in the Markovian setting, but only with diminishing or accuracy-dependent stepsizes; for the nonlinear TDC algorithm, only asymptotic convergence had been established. In this paper, we study the non-asymptotic convergence rate of two timescale linear and nonlinear TDC and Greedy-GQ under Markovian sampling and with accuracy-independent constant stepsizes. For linear TDC, we provide a novel non-asymptotic analysis and show that it attains an epsilon-accurate solution with the optimal sample complexity of O(epsilon^{-1} log(1/epsilon)) under a constant stepsize. For nonlinear TDC and Greedy-GQ, we show that both algorithms attain an epsilon-accurate stationary solution with sample complexity O(epsilon^{-2}). This is the first non-asymptotic convergence result for nonlinear TDC under Markovian sampling, and our result for Greedy-GQ improves upon the previous result by a factor of O(epsilon^{-1} log(1/epsilon)).
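To make the two timescale structure discussed in the abstract concrete, the sketch below implements the standard linear TDC update (in the usual form due to Sutton et al., 2009) with constant stepsizes under Markovian sampling. The helper names sample_transition and phi, the stepsize values, and the initialization are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def linear_tdc(sample_transition, phi, dim, alpha=0.01, beta=0.05,
               gamma=0.95, num_iters=10_000, s_init=0):
    """Minimal sketch of two timescale linear TDC with constant stepsizes.

    Assumed (hypothetical) interfaces:
      sample_transition(s) -> (r, s_next): one Markovian transition under the
          behavior policy, returning the reward and next state.
      phi(s) -> feature vector of length `dim`.
    theta is the slow (value-function) iterate; w is the fast (gradient-
    correction) iterate, so typically beta > alpha.
    """
    theta = np.zeros(dim)   # slow timescale iterate
    w = np.zeros(dim)       # fast timescale iterate
    s = s_init
    for _ in range(num_iters):
        r, s_next = sample_transition(s)
        f, f_next = phi(s), phi(s_next)
        delta = r + gamma * theta @ f_next - theta @ f        # TD error
        # slow update with the gradient-correction term gamma * (f @ w) * f_next
        theta = theta + alpha * (delta * f - gamma * (f @ w) * f_next)
        # fast update: w tracks the least-squares solution of E[phi phi^T] w = E[delta phi]
        w = w + beta * (delta - f @ w) * f
        s = s_next
    return theta, w
```

Both stepsizes are held constant and independent of the target accuracy, which is the regime analyzed in the paper; the specific constants here are placeholders.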
Year
2021
Venue
24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS)
DocType
Conference
Volume
130
ISSN
2640-3498
Citations
0
PageRank
0.34
References
0
Authors
2
Name | Order | Citations | PageRank
Tengyu Xu | 1 | 1 | 5.75
Yingbin Liang | 2 | 1646 | 147.64