Title
Policy Optimization with Stochastic Mirror Descent
Abstract
Improving sample efficiency has been a longstanding goal in reinforcement learning. This paper proposes the VRMPO algorithm: a sample-efficient policy gradient method with stochastic mirror descent. In VRMPO, a novel variance-reduced policy gradient estimator is presented to improve sample efficiency. We prove that the proposed VRMPO needs only O(ε^{-3}) sample trajectories to achieve an ε-approximate first-order stationary point, which matches the best-known sample complexity for policy optimization. Extensive empirical results demonstrate that VRMPO outperforms state-of-the-art policy gradient methods in various settings.
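As a rough illustration of the policy-optimization-with-mirror-descent idea named in the abstract, the following is a minimal sketch, assuming a toy three-armed bandit, a negative-entropy mirror map (so the mirror ascent step becomes an exponentiated-gradient update on the probability simplex), and a plain importance-weighted REINFORCE estimator. It is not the paper's VRMPO algorithm, which additionally plugs in a variance-reduced policy gradient estimator; the environment, step size, and exploration mixing below are illustrative assumptions.

# Illustrative sketch only: stochastic mirror ascent on a toy bandit,
# NOT the paper's VRMPO algorithm (no variance-reduced estimator here).
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.3, 0.9])   # hypothetical mean rewards per arm
K = len(true_means)

q = np.full(K, 1.0 / K)                  # policy: distribution over arms
eta, gamma = 0.02, 0.1                   # step size, exploration mixing

for t in range(2000):
    # Sample an action from the policy mixed with uniform exploration,
    # so the importance-weighted gradient estimate below stays bounded.
    p = (1 - gamma) * q + gamma / K
    a = rng.choice(K, p=p)
    reward = true_means[a] + 0.1 * rng.standard_normal()

    # Unbiased (importance-weighted) estimate of d E[reward] / d q.
    grad = np.zeros(K)
    grad[a] = reward / p[a]

    # Mirror ascent step with the negative-entropy mirror map:
    # q_{t+1} ∝ q_t * exp(eta * grad), which keeps q on the simplex.
    q = q * np.exp(eta * grad)
    q /= q.sum()

print("learned policy:", np.round(q, 3))  # mass concentrates on the best arm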
Year
2022
Venue
AAAI Conference on Artificial Intelligence
Keywords
Machine Learning (ML)
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
7
Name | Order | Citations | PageRank
Long Yang | 1 | 2 | 2.08
Yu Zhang | 2 | 0 | 1.01
Gang Zheng | 3 | 0 | 1.35
Qian Zheng | 4 | 0 | 0.68
Pengfei Li | 5 | 3 | 1.71
Jianhang Huang | 6 | 1 | 0.73
Gang Pan | 7 | 0 | 1.35