Title
Procedural Level Generation Improves Generality of Deep Reinforcement Learning.
Abstract
Over the last few years, deep reinforcement learning (RL) has shown impressive results in a variety of domains, learning directly from high-dimensional sensory streams. However, when networks are trained in a fixed environment, such as on a single level in a video game, they usually overfit and fail to generalize to new levels. When RL agents overfit, even slight modifications to the environment can result in poor performance. In this paper, we present an approach to prevent overfitting by producing more general agent controllers: the agent is trained on a completely new, procedurally generated level each episode. The level generator produces levels whose difficulty slowly increases in response to the observed performance of the agent. Our results show that this approach can learn policies that generalize better to other procedurally generated levels, compared to policies trained on fixed levels.
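The abstract describes an episode-level loop: generate a fresh level each episode and adjust the generator's difficulty parameter according to the agent's observed performance. The following Python sketch illustrates that idea only; Agent, make_level, run_episode, and the difficulty-update rule are hypothetical placeholders, not the authors' implementation.

```python
import random


class Agent:
    """Stand-in for an RL agent; act() and update() would wrap a real policy."""

    def act(self, observation):
        return random.choice([0, 1, 2, 3])  # placeholder random policy

    def update(self, trajectory):
        pass  # a real agent would apply a policy-gradient or value-based update here


def make_level(difficulty: float):
    """Hypothetical level generator: harder levels for larger difficulty in [0, 1]."""
    return {"difficulty": difficulty, "seed": random.random()}


def run_episode(agent: Agent, level) -> float:
    """Hypothetical rollout; returns a score in [0, 1], e.g. fraction of the level solved."""
    return random.random() * (1.0 - level["difficulty"])  # placeholder outcome


def train(num_episodes: int = 1000, target_score: float = 0.5, step: float = 0.01):
    agent = Agent()
    difficulty = 0.0  # start with the easiest levels
    for _ in range(num_episodes):
        level = make_level(difficulty)   # a brand-new procedurally generated level every episode
        score = run_episode(agent, level)
        agent.update(trajectory=None)    # placeholder update call
        # Raise difficulty when the agent performs well, back off otherwise,
        # so level hardness tracks the agent's current skill.
        if score > target_score:
            difficulty = min(1.0, difficulty + step)
        else:
            difficulty = max(0.0, difficulty - step)
    return agent


if __name__ == "__main__":
    train(num_episodes=10)
```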
Year
2018
Venue
arXiv: Learning
Field
Computer science, Artificial intelligence, Generality, Reinforcement learning
DocType
Journal
Volume
abs/1806.10729
Citations
0
PageRank
0.34
References
0
Authors
6
Name | Order | Citations | PageRank
Niels Justesen | 1 | 32 | 4.82
Ruben Rodriguez Torrado | 2 | 2 | 2.16
Philip Bontrager | 3 | 0 | 1.01
Ahmed Aziz Khalifa | 4 | 69 | 12.04
Julian Togelius | 5 | 2765 | 219.94
Sebastian Risi | 6 | 1 | 1.70