Title
Adaptive Temporal-Difference Learning for Policy Evaluation with Per-State Uncertainty Estimates
Abstract
We consider the core reinforcement-learning problem of on-policy value function approximation from a batch of trajectory data, and focus on the trade-offs between Temporal Difference (TD) learning and Monte Carlo (MC) policy evaluation. The two methods are known to have complementary bias-variance properties, with TD tending to achieve lower variance but potentially higher bias. In this paper, we argue that the larger bias of TD can be a result of the amplification of local approximation errors. We address this by proposing an algorithm that adaptively switches between TD and MC in each state, thus mitigating the propagation of errors. Our method is based on learned confidence intervals that detect bias in the TD estimates. We demonstrate in a variety of policy evaluation tasks that this simple adaptive algorithm performs competitively with the best approach in hindsight, suggesting that learned confidence intervals are a powerful technique for adapting policy evaluation to use TD or MC returns in a data-driven way.
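As a rough illustration of the per-state switching idea described in the abstract (not the authors' implementation), the sketch below uses a tabular setting in which each state's confidence interval is a normal approximation built from its empirical Monte Carlo returns; the function adaptive_td_evaluation, the (state, reward) trajectory format, and all parameter values are assumptions made for this example only. A TD(0) target that falls inside the interval is trusted, and the update falls back to the Monte Carlo return otherwise.

import numpy as np

def adaptive_td_evaluation(trajectories, num_states, gamma=0.99, alpha=0.1,
                           n_passes=20, z=1.0):
    # trajectories: list of episodes, each a list of (state, reward) pairs,
    # where reward is the reward received after leaving that state.
    mc_sum = np.zeros(num_states)
    mc_sq = np.zeros(num_states)
    mc_cnt = np.zeros(num_states)

    # Pass 1: per-state Monte Carlo returns -> mean and a normal-approximation
    # confidence interval (half_width) around the MC value estimate.
    for traj in trajectories:
        g = 0.0
        for s, r in reversed(traj):
            g = r + gamma * g
            mc_sum[s] += g
            mc_sq[s] += g ** 2
            mc_cnt[s] += 1
    cnt = np.maximum(mc_cnt, 1)
    mean = mc_sum / cnt
    var = np.maximum(mc_sq / cnt - mean ** 2, 0.0)
    half_width = z * np.sqrt(var / cnt)

    # Pass 2: TD(0) updates; whenever the bootstrapped target falls outside a
    # state's interval, fall back to the (unbiased) Monte Carlo return instead.
    v = mean.copy()
    for _ in range(n_passes):
        for traj in trajectories:
            g = 0.0
            mc_targets = [0.0] * len(traj)
            for t in range(len(traj) - 1, -1, -1):
                g = traj[t][1] + gamma * g
                mc_targets[t] = g
            for t, (s, r) in enumerate(traj):
                bootstrap = v[traj[t + 1][0]] if t + 1 < len(traj) else 0.0
                td_target = r + gamma * bootstrap
                lo, hi = mean[s] - half_width[s], mean[s] + half_width[s]
                target = td_target if lo <= td_target <= hi else mc_targets[t]
                v[s] += alpha * (target - v[s])
    return v

# Example on a tiny 3-state chain (hypothetical data).
trajs = [[(0, 0.0), (1, 0.0), (2, 1.0)],
         [(0, 0.0), (2, 1.0)]]
print(adaptive_td_evaluation(trajs, num_states=3))

In this sketch the interval gate is what keeps local errors from propagating: a bootstrapped target is used only where it agrees with the unbiased Monte Carlo evidence for that state, and rejected elsewhere.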
Year
2019
Venue
NeurIPS
Keywords
temporal difference learning, propagation of errors
Field
Temporal difference learning, Computer science, Artificial intelligence, Machine learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
8
Name                  Order  Citations  PageRank
Carlos Riquelme       1      11         2.30
Hugo Penedones        2      36         2.46
Damien Vincent        3      0          1.01
Hartmut Maennel       4      2          1.73
Sylvain Gelly         5      760        59.74
Timothy Arthur Mann   6      97         15.88
André Barreto         7      12         5.65
Gergely Neu           8      300        25.03