Title
Mean Actor-Critic
Abstract
We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent's explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. We prove that this approach reduces variance in the policy gradient estimate relative to traditional actor-critic methods. We show empirical results on two control domains and on six Atari games, where MAC is competitive with state-of-the-art policy search algorithms.
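The abstract's core idea, averaging over all action values instead of only the sampled action, can be illustrated with a small sketch. This is an assumption-laden toy example (hypothetical softmax policy with linear logits and random Q-values), not the paper's implementation: the gradient is formed as a sum over every action's probability-gradient weighted by its Q-value.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over action logits
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical toy setup: 3 discrete actions, linear logits theta @ phi(s)
rng = np.random.default_rng(0)
n_actions, n_features = 3, 4
theta = rng.normal(size=(n_actions, n_features))  # policy parameters
phi = rng.normal(size=n_features)                 # state features phi(s)
Q = rng.normal(size=n_actions)                    # critic's Q(s, a) estimates

pi = softmax(theta @ phi)

# Gradient in the style described by the abstract: sum over ALL actions of
# grad pi(a|s) * Q(s, a), rather than using only the executed action.
# For a softmax, d pi / d logits = diag(pi) - pi pi^T; chain through theta.
jac_logits = np.diag(pi) - np.outer(pi, pi)  # Jacobian of pi w.r.t. logits
grad_logits = jac_logits @ Q                 # sum_a Q(a) * d pi(a)/d logits
grad_theta = np.outer(grad_logits, phi)      # chain rule back to theta

print(grad_theta.shape)  # (3, 4)
```

Because the columns of the softmax Jacobian each sum to zero, `grad_logits` sums to zero as well; the update only shifts probability mass between actions, which is one intuition for why averaging over all actions can reduce estimator variance.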
Year: 2017
Venue: CoRR
DocType: Journal
Volume: abs/1709.00503
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name                    Order  Citations  PageRank
Kavosh Asadi            1      82         7.28
Cameron Allen           2      0          0.68
Melrose Roderick        3      0          1.01
Abdel-rahman Mohamed    4      3772       266.13
George Konidaris        5      801        59.30
Michael L. Littman      6      9798       961.84