Title
Observe and Look Further: Achieving Consistent Performance on Atari.
Abstract
Despite significant advances in the field of deep Reinforcement Learning (RL), today's algorithms still fail to learn human-level policies consistently over a set of diverse tasks such as Atari 2600 games. We identify three key challenges that any algorithm needs to master in order to perform well on all games: processing diverse reward distributions, reasoning over long time horizons, and exploring efficiently. In this paper, we propose an algorithm that addresses each of these challenges and is able to learn human-level policies on nearly all Atari games. A new transformed Bellman operator allows our algorithm to process rewards of varying densities and scales; an auxiliary temporal consistency loss allows us to train stably with a discount factor of $\gamma = 0.999$ (instead of $\gamma = 0.99$), extending the effective planning horizon by an order of magnitude; and we ease the exploration problem by using human demonstrations that guide the agent towards rewarding states. When tested on a set of 42 Atari games, our algorithm exceeds the performance of an average human on 40 games using a common set of hyperparameters. Furthermore, it is the first deep RL algorithm to solve the first level of Montezuma's Revenge.
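The abstract only names the transformed Bellman operator; as a rough illustration of how such an operator changes a one-step TD target, the sketch below uses the commonly cited squashing function h(z) = sign(z)(sqrt(|z| + 1) - 1) + eps * z and its closed-form inverse. The constant EPS and the helper names are assumptions for illustration, not an exact reproduction of the paper's training loss (which also uses n-step returns, the temporal consistency loss, and demonstration losses).

```python
import numpy as np

EPS = 1e-2  # assumed regularization constant for the squashing function


def h(z):
    """Squash values so rewards of very different scales stay comparable."""
    return np.sign(z) * (np.sqrt(np.abs(z) + 1.0) - 1.0) + EPS * z


def h_inv(z):
    """Closed-form inverse of h (valid for the EPS > 0 regularized form)."""
    return np.sign(z) * (
        ((np.sqrt(1.0 + 4.0 * EPS * (np.abs(z) + 1.0 + EPS)) - 1.0) / (2.0 * EPS)) ** 2
        - 1.0
    )


def transformed_td_target(reward, done, gamma, next_q_values):
    """Illustrative one-step target h(r + gamma * h^-1(max_a' Q(s', a'))).

    `done` is 0.0 or 1.0; `next_q_values` holds Q(s', a') for every action,
    already expressed in the transformed (squashed) space.
    """
    bootstrap = (1.0 - done) * gamma * h_inv(np.max(next_q_values))
    return h(reward + bootstrap)
```

Because the target is squashed by h, the network regresses values of roughly unit scale even when raw game scores differ by several orders of magnitude, which is what allows a single set of hyperparameters across games.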
Year
2018
Venue
arXiv: Learning
Field
Mathematical optimization, Time horizon, Discounting, Exploration problem, Operator (computer programming), Artificial intelligence, Temporal consistency, Mathematics, Reinforcement learning
DocType
Journal
Volume
abs/1805.11593
Citations
10
PageRank
0.46
References
10
Authors
13
Name                        Order  Citations  PageRank
Tobias Pohlen               1      10         0.46
Bilal Piot                  2      335        20.65
Todd Hester                 3      330        31.53
Mohammad Gheshlaghi Azar    4      238        15.60
Dan Horgan                  5      105        4.38
David Budden                6      167        18.45
Gabriel Barth-Maron         7      89         5.30
Hado van Hasselt            8      432        31.39
John Quan                   9      339        13.28
Mel Vecerík                 10     12         2.28
Matteo Hessel               11     133        10.65
Rémi Munos                  12     2240       157.06
Olivier Pietquin            13     664        68.60