Title: Learning values across many orders of magnitude
Abstract
Most learning algorithms are not invariant to the scale of the signal that is being approximated. We propose to adaptively normalize the targets used in the learning updates. This is important in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance.
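The abstract's core idea, adaptively normalizing the targets used in learning updates while leaving the unnormalized predictions unchanged, can be illustrated with a minimal sketch. The class name, the moment-update step size `beta`, and the scalar output head `(w, b)` are illustrative assumptions, not the paper's exact implementation: running first and second moments of the targets are tracked, targets are normalized by them for the update, and the output head is rescaled to compensate so predictions are preserved.

```python
import numpy as np

class AdaptiveTargetNormalizer:
    """Sketch of adaptive target normalization (assumed interface).

    Tracks running moments of the targets, normalizes targets for the
    learning update, and rescales a scalar output head (w, b) so the
    unnormalized predictions are unchanged by the statistics update.
    """

    def __init__(self, beta=1e-3):
        self.beta = beta   # step size for the running-moment updates (assumed)
        self.mean = 0.0    # running mean of targets
        self.sq = 1.0      # running second moment of targets
        self.w = 1.0       # scale of the output head
        self.b = 0.0       # offset of the output head

    def sigma(self):
        # standard deviation from the running moments, floored for stability
        return max(np.sqrt(self.sq - self.mean ** 2), 1e-4)

    def update_stats(self, target):
        old_mean, old_sigma = self.mean, self.sigma()
        # move the running moments toward the new target
        self.mean += self.beta * (target - self.mean)
        self.sq += self.beta * (target ** 2 - self.sq)
        new_sigma = self.sigma()
        # compensate the head so sigma' * (w'x + b') + mean' equals
        # sigma * (wx + b) + mean for every input x
        self.w *= old_sigma / new_sigma
        self.b = (old_sigma * self.b + old_mean - self.mean) / new_sigma

    def normalize(self, target):
        return (target - self.mean) / self.sigma()

    def denormalize(self, y):
        return self.sigma() * y + self.mean
```

The compensation step is what lets the statistics track targets of any magnitude without perturbing the function the network currently represents: only the normalized view of the targets changes, not the predictions.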
Year: 2016
Venue: Neural Information Processing Systems
Fields: Online machine learning, Heuristic, Normalization (statistics), Semi-supervised learning, Stability (learning theory), Active learning (machine learning), Computer science, Q-learning, Artificial intelligence, Machine learning, Reinforcement learning
DocType: Conference
Citations: 1
PageRank: 0.35
References: 0
Authors: 5
Name              Order  Citations  PageRank
Hado van Hasselt  1      432        31.39
Arthur Guez       2      2481       100.43
Matteo Hessel     3      133        10.65
Volodymyr Mnih    4      3796       158.28
David Silver      5      8252       363.86