Title
Value-driven Hindsight Modelling
Abstract
Value estimation is a critical component of the reinforcement learning (RL) paradigm. The question of how to effectively learn value predictors from data is one of the major problems studied by the RL community, and different approaches exploit structure in the problem domain in different ways. Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function. In contrast, model-free methods directly leverage the quantity of interest from the future, but have to contend with a potentially weak scalar signal (an estimate of the return). In this paper we develop an approach for representation learning in RL that sits in between these two extremes: we propose to learn what to model in a way that can directly help value prediction. To this end we determine which features of the future trajectory provide useful information to predict the associated return. This provides us with tractable prediction targets that are directly relevant for the task, and can thus accelerate learning of the value function. The idea can be understood as reasoning, in hindsight, about which aspects of the future observations could help past value prediction. We show how this can dramatically aid learning even in simple policy evaluation settings. We then test our approach at scale in challenging domains, including on 57 Atari 2600 games.
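As a rough illustration of the idea summarised in the abstract, the sketch below shows one plausible reading of hindsight value modelling in PyTorch: a hindsight network compresses the observed future trajectory into a small feature vector, that vector (together with the current state) is used to regress the return, a separate causal model learns to predict the feature vector from the current state alone, and the value head used at acting time consumes the predicted features. The module names, network sizes, loss weighting, and the use of plain Monte-Carlo returns are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of value-driven hindsight modelling; all sizes and losses are
# illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class HindsightValueLearner(nn.Module):
    def __init__(self, state_dim: int, future_dim: int, feat_dim: int = 8):
        super().__init__()
        # phi: summarises the *observed* future trajectory into a few features
        # (only available in hindsight, i.e. at training time).
        self.phi = nn.Sequential(nn.Linear(future_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))
        # v_plus: hindsight value estimate from state + future features; its loss
        # shapes phi so the features become useful for predicting the return.
        self.v_plus = nn.Sequential(nn.Linear(state_dim + feat_dim, 64), nn.ReLU(),
                                    nn.Linear(64, 1))
        # phi_hat: causal model predicting the hindsight features from the
        # current state alone, so they can be used when acting.
        self.phi_hat = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim))
        # v: the value head actually used at evaluation time, fed with
        # the predicted (not the hindsight) features.
        self.v = nn.Sequential(nn.Linear(state_dim + feat_dim, 64), nn.ReLU(),
                               nn.Linear(64, 1))

    def losses(self, state, future, ret):
        feat = self.phi(future)
        # (1) hindsight value loss: trains v_plus and, through it, phi.
        loss_vp = ((self.v_plus(torch.cat([state, feat], -1)).squeeze(-1) - ret) ** 2).mean()
        # (2) model loss: phi_hat regresses the (detached) hindsight features.
        pred = self.phi_hat(state)
        loss_model = ((pred - feat.detach()) ** 2).mean()
        # (3) ordinary value loss, using the predicted features as extra input
        # (detached so value errors do not corrupt the feature model).
        loss_v = ((self.v(torch.cat([state, pred.detach()], -1)).squeeze(-1) - ret) ** 2).mean()
        return loss_vp + loss_model + loss_v

    def value(self, state):
        # At acting time only the causal pathway is available.
        return self.v(torch.cat([state, self.phi_hat(state)], -1)).squeeze(-1)

# Toy usage on random data (Monte-Carlo returns stand in for real targets).
learner = HindsightValueLearner(state_dim=4, future_dim=12)
opt = torch.optim.Adam(learner.parameters(), lr=1e-3)
state, future, ret = torch.randn(32, 4), torch.randn(32, 12), torch.randn(32)
loss = learner.losses(state, future, ret)
opt.zero_grad(); loss.backward(); opt.step()
print(learner.value(state).shape)  # torch.Size([32])
```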
Year
2020
Venue
NeurIPS 2020
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors (8)
1. Arthur Guez
2. Fabio Viola
3. Theophane Weber
4. Lars Buesing
5. Steven Kapturowski
6. Doina Precup
7. David Silver
8. Nicolas Heess