Abstract |
---|
Existing convergence analyses of Q-learning mostly focus on vanilla stochastic gradient descent (SGD) type updates. Although Adaptive Moment Estimation (Adam) has been commonly used in practical Q-learning algorithms, no convergence guarantee has been provided for Q-learning with such updates. In this paper, we first characterize the convergence rate of Q-AMSGrad, the Q-learning algorithm with the AMSGrad update (a commonly adopted alternative to Adam for theoretical analysis). To further improve performance, we propose incorporating a momentum restart scheme into Q-AMSGrad, resulting in the so-called Q-AMSGradR algorithm. The convergence rate of Q-AMSGradR is also established. Our experiments on a linear quadratic regulator problem show that the two proposed Q-learning algorithms outperform vanilla Q-learning with SGD updates. The two algorithms also exhibit significantly better performance than DQN on a batch of Atari 2600 games. |
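The AMSGrad update that Q-AMSGrad builds on can be sketched as follows. This is a minimal NumPy illustration of the generic AMSGrad step, not the paper's exact Q-learning algorithm; the hyperparameter values and the periodic momentum reset (loosely mirroring the restart idea of Q-AMSGradR) are illustrative assumptions:

```python
import numpy as np

def amsgrad_step(theta, grad, m, v, v_hat,
                 lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad update on parameters `theta` given gradient `grad`.

    Like Adam, but uses the running *maximum* of the second-moment
    estimate, so the effective step size is non-increasing.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    v_hat = np.maximum(v_hat, v)                # the max is what distinguishes AMSGrad from Adam
    theta = theta - lr * m / (np.sqrt(v_hat) + eps)
    return theta, m, v, v_hat

# Toy usage: minimize f(theta) = theta^2.  A restart-style variant
# (in the spirit of Q-AMSGradR; the reset period here is an assumption)
# would additionally zero the momentum every `period` steps.
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
v_hat = np.zeros_like(theta)
period = 100
for t in range(500):
    grad = 2 * theta                            # gradient of theta^2
    theta, m, v, v_hat = amsgrad_step(theta, grad, m, v, v_hat, lr=0.1)
    if (t + 1) % period == 0:
        m = np.zeros_like(theta)                # momentum restart (illustrative)
```

After 500 steps the iterate has moved close to the minimizer at zero.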
Year | DOI | Venue |
---|---|---|
2020 | 10.24963/ijcai.2020/422 | IJCAI 2020 |
DocType | ISSN | Citations |
---|---|---|
Conference | Proceedings of the Twenty-Ninth International Joint Conference, IJCAI20 (2020) 3051-3057 | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 0 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Bowen Weng | 1 | 1 | 3.07 |
Huaqing Xiong | 2 | 0 | 0.34 |
Yingbin Liang | 3 | 1646 | 147.64 |
Wei Zhang | 4 | 236 | 33.77 |