Title
Adversarially Guided Actor-Critic
Abstract
Despite definite success in deep reinforcement learning problems, actor-critic algorithms are still confronted with sample inefficiency in complex environments. These methods consider a policy (the actor) and a value (the critic) whose respective losses are obtained using different motivations and approaches. We introduce a third protagonist, the adversary. While this adversary mimics the actor by minimizing the KL-divergence between their respective action distributions, the actor maximizes the log-probability difference between its action and that of the adversary in combination with maximizing expected rewards. This novel objective stimulates the actor to follow strategies that could not have been correctly predicted from previous trajectories, making its behavior innovative in tasks where the reward is extremely rare. Our experimental analysis shows that the resulting Adversarially Guided Actor-Critic (AGAC) algorithm leads to more exhaustive exploration. Notably, AGAC outperforms current state-of-the-art methods on a diverse set of hard-exploration and procedurally-generated tasks.
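The abstract describes two coupled objectives: the adversary minimizes the KL-divergence from the actor's action distribution, while the actor maximizes expected rewards together with the log-probability difference between its actions and the adversary's predictions. Below is a minimal sketch of those two losses for discrete actions, assuming PyTorch; the function name, the bonus weight c, and the exact way the terms are combined are illustrative assumptions based only on the abstract, not the authors' implementation.

```python
import torch
from torch.distributions import Categorical, kl_divergence


def agac_losses(actor_logits, adversary_logits, actions, advantages, c=0.5):
    """Return (actor_loss, adversary_loss) for one batch of transitions.

    actor_logits, adversary_logits: (batch, n_actions) policy logits.
    actions: (batch,) actions taken; advantages: (batch,) advantage estimates.
    c: weight of the adversarial bonus (hypothetical hyperparameter).
    """
    actor_dist = Categorical(logits=actor_logits)

    # Adversary: mimic the actor by minimizing KL(actor || adversary),
    # treating the actor's distribution as a fixed target.
    adversary_dist = Categorical(logits=adversary_logits)
    adversary_loss = kl_divergence(
        Categorical(logits=actor_logits.detach()), adversary_dist
    ).mean()

    # Actor: maximize expected rewards (policy-gradient term) plus the
    # log-probability difference between its actions and the adversary's
    # predictions, with the adversary held fixed.
    log_pi = actor_dist.log_prob(actions)
    log_pi_adv = Categorical(logits=adversary_logits.detach()).log_prob(actions)
    actor_loss = -(advantages.detach() * log_pi + c * (log_pi - log_pi_adv)).mean()

    return actor_loss, adversary_loss
```

In this sketch, the actor is rewarded for choosing actions the adversary assigns low probability to, which matches the abstract's claim that the actor is pushed toward behavior that could not have been predicted from previous trajectories.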
Year: 2021
Venue: ICLR
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors (5)
Name | Order | Citations | PageRank
Yannis Flet-Berliac | 1 | 0 | 2.03
Johan Ferret | 2 | 1 | 0.69
Olivier Pietquin | 3 | 664 | 68.60
Philippe Preux | 4 | 188 | 30.86
Matthieu Geist | 5 | 385 | 44.31