Title
Quantifying Generalization in Reinforcement Learning
Abstract
In this paper, we investigate the problem of overfitting in deep reinforcement learning. Among the most common benchmarks in RL, it is customary to use the same environments for both training and testing. This practice offers relatively little insight into an agent's ability to generalize. We address this issue by using procedurally generated environments to construct distinct training and test sets. Most notably, we introduce a new environment called CoinRun, designed as a benchmark for generalization in RL. Using CoinRun, we find that agents overfit to surprisingly large training sets. We then show that deeper convolutional architectures improve generalization, as do methods traditionally found in supervised learning, including L2 regularization, dropout, data augmentation, and batch normalization.
Year: 2018
Venue: International Conference on Machine Learning
Field: Normalization (statistics), Supervised learning, Regularization (mathematics), Artificial intelligence, Overfitting, Machine learning, Mathematics, Reinforcement learning
DocType:
Volume: abs/1812.02341
Journal:
Citations: 4
PageRank: 0.37
References: 14
Authors: 5
Name                Order  Citations  PageRank
Karl Cobbe          1      4          1.05
Oleg Klimov         2      115        3.60
Christopher Hesse   3      4          0.71
Taehoon Kim         4      43         7.51
John Schulman       5      18066      6.95