Title
Combining policy gradient and Q-learning
Abstract
Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and unable to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQL’, for policy gradient and Q-learning. We also establish an equivalence between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQL. In particular, we tested PGQL on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.
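The relation described in the abstract can be made concrete. Below is a minimal Python/NumPy sketch of the combined update on a toy tabular problem, assuming an entropy-regularized softmax policy with a value baseline: Q-values are estimated from the action preferences as Q(s, a) ≈ α(log π(a|s) + H(π(·|s))) + V(s), and a Q-learning step on that estimate is mixed with the policy-gradient step. All sizes, hyperparameters (alpha, eta, the learning rates), and the random-transition "environment" are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of the PGQL-style combined update on a toy tabular problem.
import numpy as np

n_states, n_actions = 5, 3    # toy sizes (illustrative)
alpha = 0.1                   # entropy-regularization weight
gamma = 0.99                  # discount factor
lr_pi, lr_v = 0.05, 0.05      # learning rates (illustrative)
eta = 0.5                     # mixing weight: eta on Q-learning, 1 - eta on PG

logits = np.zeros((n_states, n_actions))  # action preferences of the policy
V = np.zeros(n_states)                    # state-value estimates

def policy(s):
    z = logits[s] - logits[s].max()
    p = np.exp(z)
    return p / p.sum()

def q_estimate(s):
    # The connection used by PGQL: at the regularized fixed point,
    # Q(s, a) ~ alpha * (log pi(a|s) + H(pi(.|s))) + V(s),
    # so Q-values can be read off the policy's action preferences.
    p = policy(s)
    logp = np.log(p + 1e-8)
    H = -(p * logp).sum()
    return alpha * (logp + H) + V[s]

def pgql_update(s, a, r, s_next):
    p = policy(s)
    logp = np.log(p + 1e-8)
    H = -(p * logp).sum()
    grad_logpi = -p.copy()
    grad_logpi[a] += 1.0            # d log pi(a|s) / d logits[s] (softmax)
    grad_H = -p * (logp + H)        # d H(pi(.|s)) / d logits[s] (softmax)
    # Entropy-regularized actor-critic step (the policy-gradient part).
    adv = r + gamma * V[s_next] - V[s]
    logits[s] += lr_pi * (1 - eta) * (adv * grad_logpi + alpha * grad_H)
    V[s] += lr_v * (1 - eta) * adv
    # Q-learning step on the Q-values implied by the policy: the Bellman
    # residual is pushed back through the same logits and value estimates.
    td = r + gamma * q_estimate(s_next).max() - q_estimate(s)[a]
    logits[s] += lr_pi * eta * td * alpha * (grad_logpi + grad_H)
    V[s] += lr_v * eta * td         # d Q(s, a) / d V(s) = 1

# Toy usage: uniformly random transitions stand in for replay-buffer samples;
# the reward and dynamics here are arbitrary placeholders.
rng = np.random.default_rng(0)
for _ in range(1000):
    s = int(rng.integers(n_states))
    a = int(rng.choice(n_actions, p=policy(s)))
    r = float(s == n_states - 1)
    s_next = int(rng.integers(n_states))
    pgql_update(s, a, r, s_next)
```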
Year
2017
Venue
International Conference on Learning Representations
Field
Asynchronous communication, Mathematical optimization, Suite, Computer science, Q-learning, Function learning, Artificial intelligence, Fixed point, Machine learning, Data efficiency, Reinforcement learning
DocType
Conference
Citations
30
PageRank
1.38
References
12
Authors
4
Name                Order  Citations  PageRank
Brendan O'Donoghue  1      30         2.39
Rémi Munos          2      2240       157.06
Koray Kavukcuoglu   3      10189      504.11
Volodymyr Mnih      4      3796       158.28