Title
Reinforcement Learning with Unsupervised Auxiliary Tasks
Abstract
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state of the art on Atari, averaging 880% expert human performance, and on a challenging suite of first-person, three-dimensional Labyrinth tasks, achieving a mean speedup in learning of 10× and averaging 87% expert human performance on Labyrinth.
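The abstract's central mechanism is a single shared representation trained by a main reward-driven loss plus several weighted auxiliary pseudo-reward losses, so the representation keeps improving even when extrinsic reward is absent. Below is a minimal, hypothetical Python sketch of that loss composition only; the names (SharedTrunk, aux_weights) and the stand-in squared-error losses are illustrative assumptions, not the paper's actual UNREAL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedTrunk:
    """Shared representation; gradients from every task would update it."""
    def __init__(self, obs_dim, feat_dim):
        self.W = rng.normal(scale=0.1, size=(feat_dim, obs_dim))

    def features(self, obs):
        return np.tanh(self.W @ obs)

def main_loss(feat, target):
    # Stand-in for the extrinsic-reward RL objective (e.g. a value error).
    return float((feat.sum() - target) ** 2)

def aux_loss(feat, pseudo_reward):
    # Stand-in for one auxiliary pseudo-reward prediction objective.
    return float((feat.mean() - pseudo_reward) ** 2)

def total_loss(trunk, obs, target, pseudo_rewards, aux_weights):
    # All tasks read the same features, so auxiliary gradients shape the
    # shared representation even when the extrinsic target is uninformative.
    feat = trunk.features(obs)
    loss = main_loss(feat, target)
    for lam, pr in zip(aux_weights, pseudo_rewards):
        loss += lam * aux_loss(feat, pr)
    return loss

trunk = SharedTrunk(obs_dim=8, feat_dim=4)
obs = rng.normal(size=8)
print(total_loss(trunk, obs, target=1.0,
                 pseudo_rewards=[0.2, -0.5], aux_weights=[0.5, 0.5]))
```

The per-task weights (aux_weights here) correspond to the idea of focusing the representation on the actual task: the extrinsic loss stays dominant while auxiliary signals act as regularising co-trainers.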
Year
2016
Venue
ICLR
Field
Multi-task learning, Suite, Computer science, Unsupervised learning, Artificial intelligence, Error-driven learning, Machine learning, Learning classifier system, Reinforcement learning, Speedup
DocType
Volume
abs/1611.05397
Citations
10
Journal
PageRank
0.48
References
0
Authors
7
Name | Order | Citations | PageRank
Max Jaderberg | 1 | 1614 | 54.60
Volodymyr Mnih | 2 | 3796 | 158.28
Wojciech Marian Czarnecki | 3 | 338 | 23.53
Tom Schaul | 4 | 916 | 79.40
Joel Z. Leibo | 5 | 299 | 21.41
David Silver | 6 | 8252 | 363.86
Koray Kavukcuoglu | 7 | 10189 | 504.11