Title
Generative adversarial exploration for reinforcement learning
Abstract
Exploration is crucial for learning an optimal reinforcement learning (RL) policy, and the key is to discriminate whether a visited state is novel. Most previous work focuses on designing heuristic rules or distance metrics to check whether a state is novel, without considering that such a discrimination process can itself be learned. In this paper, we propose a novel method called generative adversarial exploration (GAEX) to encourage exploration in RL by introducing an intrinsic reward produced by a generative adversarial network, in which the generator provides fake samples of states that help the discriminator identify less frequently visited states. The agent is thus encouraged to visit states that the discriminator is less confident to judge as visited. GAEX is easy to implement and trains efficiently. In our experiments, we apply GAEX to DQN, and the resulting DQN-GAEX algorithm achieves convincing performance on challenging exploration problems, including the games Venture, Montezuma's Revenge, and Super Mario Bros, without further fine-tuning of complicated learning algorithms. To our knowledge, this is the first work to employ a GAN in RL exploration problems.
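The abstract only outlines the mechanism, so the following is a minimal sketch (not the authors' released code) of how a GAN-driven exploration bonus of this kind could be wired up. The module names (Generator, Discriminator, gaex_intrinsic_reward), the network sizes, and the bonus formula r_int(s) = -beta * log D(s) are assumptions for illustration, where D(s) denotes the discriminator's probability that state s has been visited.

```python
# Sketch of a GAN-based exploration bonus, assuming flat state encodings.
# All names and the exact reward shaping are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, NOISE_DIM = 64, 16  # assumed sizes

class Generator(nn.Module):
    """Maps noise to fake state samples that mimic visited states."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 128), nn.ReLU(),
            nn.Linear(128, STATE_DIM))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Outputs the probability that a state comes from the visited-state buffer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid())
    def forward(self, s):
        return self.net(s)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

def gan_update(visited_states):
    """One adversarial step: D separates visited states from G's fakes."""
    batch = visited_states.size(0)
    fake = G(torch.randn(batch, NOISE_DIM))

    # Discriminator step: visited states -> 1, generated fakes -> 0.
    d_loss = bce(D(visited_states), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool D into labelling fakes as visited.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

def gaex_intrinsic_reward(state, beta=0.1):
    """Exploration bonus: larger when D doubts the state has been visited."""
    with torch.no_grad():
        p_visited = D(state.unsqueeze(0)).item()
    return -beta * torch.log(torch.tensor(p_visited + 1e-8)).item()
```

In a DQN-GAEX-style setup, this bonus would simply be added to the environment reward before the TD update, so states the discriminator cannot confidently classify as visited yield a larger learning signal.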
Year
2019
DOI
10.1145/3356464.3357706
Venue
Proceedings of the First International Conference on Distributed Artificial Intelligence
Keywords
exploration, generative adversarial network, reinforcement learning
DocType
Conference
ISBN
978-1-4503-7656-3
Citations
0
PageRank
0.34
References
0
Authors
7
Name           Order   Citations   PageRank
Weijun Hong    1       0           0.34
Menghui Zhu    2       0           1.01
Minghuan Liu   3       1           1.37
Weinan Zhang   4       1228        97.24
Ming Zhou      5       4262        251.74
Yong Yu        6       7637        380.66
Peng Sun       7       11          1.63