Title
Learning and Querying Fast Generative Models for Reinforcement Learning
Abstract
A key challenge in model-based reinforcement learning (RL) is to synthesize environment models that are both computationally efficient and accurate. We show that carefully designed generative models that learn and operate on compact state representations, so-called state-space models, substantially reduce the computational cost of predicting the outcomes of action sequences. Extensive experiments establish that state-space models accurately capture the dynamics of Atari games from the Arcade Learning Environment directly from raw pixels. Because state-space models achieve this speed-up while maintaining high accuracy, their application in RL becomes feasible: we show that agents which query these models for decision making outperform strong model-free baselines on the game MSPACMAN, underscoring the potential of learned environment models for planning.
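The core idea in the abstract, rolling an environment model forward in a compact latent state instead of in pixel space, can be illustrated with a minimal sketch. This is not the paper's architecture: the linear transition, the action one-hot conditioning, the decoder, and all sizes below are illustrative assumptions, with random stand-ins for learned parameters.

```python
# Minimal sketch (assumed, not the paper's model): a deterministic
# state-space model that predicts the outcome of an action sequence
# entirely in a small latent space, decoding to pixels only at the end.
import numpy as np

rng = np.random.default_rng(0)
LATENT, ACTIONS, PIXELS = 32, 4, 84 * 84   # assumed sizes

# Stand-ins for learned parameters.
W_s = rng.normal(scale=0.1, size=(LATENT, LATENT))    # state transition
W_a = rng.normal(scale=0.1, size=(LATENT, ACTIONS))   # action conditioning
W_dec = rng.normal(scale=0.1, size=(PIXELS, LATENT))  # pixel decoder

def step(state, action):
    """One latent transition: cost scales with LATENT, not with PIXELS."""
    a = np.eye(ACTIONS)[action]           # one-hot action
    return np.tanh(W_s @ state + W_a @ a)

def rollout(state, actions):
    """Predict the latent outcome of a whole action sequence."""
    for action in actions:
        state = step(state, action)
    return state

s0 = np.zeros(LATENT)
s_final = rollout(s0, [0, 1, 3, 2, 2])
frame = W_dec @ s_final  # decode to pixels only once, at the end
print(s_final.shape, frame.shape)  # (32,) (7056,)
```

The point of the sketch is the cost structure: each simulated step touches only LATENT-sized vectors, and the expensive pixel decoding happens once per rollout (or not at all, if the planner only needs latent states), which is what makes querying such a model inside an RL loop affordable.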
Year
2018
Venue
arXiv: Learning
Field
Artificial intelligence, Learning environment, Pixel, Generative grammar, Machine learning, Mathematics, Reinforcement learning
DocType
Journal
Volume
abs/1802.03006
Citations
12
PageRank
0.46
References
20
Authors
11
Name                     Order   Citations   PageRank
Lars Buesing             1       248         16.50
Theophane Weber          2       159         16.79
Sébastien Racanière      3       28          1.42
S. M. Ali Eslami         4       119         8.58
Danilo Jimenez Rezende   5       1567        81.67
David P. Reichert        6       88          6.85
Fabio Viola              7       202         8.87
Frederic Besse           8       100         5.17
Karol Gregor             9       1173        72.53
Demis Hassabis           10      4924        191.12
Daan Wierstra            11      5412        255.92