Title
High Confidence Off-Policy Evaluation with Models
Abstract
In many reinforcement learning applications, executing a poor policy may be costly or even dangerous. Thus, it is desirable to determine confidence interval lower bounds on the performance of any given policy without executing that policy. Current methods for high confidence off-policy evaluation require a substantial amount of data to achieve a tight lower bound, while existing model-based methods only address the problem in discrete state spaces. We propose two bootstrapping approaches combined with learned MDP transition models to efficiently estimate lower confidence bounds on policy performance with limited data, in both continuous and discrete state spaces. Since direct use of a model may introduce bias, we derive a theoretical upper bound on model bias when the model transitions are estimated from i.i.d. sampled trajectories; this bound can guide the choice between the two methods. Finally, we empirically validate the data efficiency of our proposed methods across three domains and analyze the settings in which one method is preferable to the other.
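The abstract does not spell out the two estimators, but the core recipe it describes (resample the observed trajectories, fit a transition model to each resample, evaluate the target policy in each fitted model, and take a low percentile of the resulting estimates) admits a short sketch. The Python below is a minimal illustration of that percentile-bootstrap idea, not the paper's exact algorithms; fit_model and evaluate_policy are hypothetical interfaces standing in for any model learner and any model-based policy evaluator.

import numpy as np

def bootstrap_lower_bound(trajectories, fit_model, evaluate_policy,
                          n_bootstrap=200, delta=0.05, seed=0):
    # Percentile-bootstrap lower confidence bound on a policy's value.
    # trajectories    : list of observed trajectories (the i.i.d. resampling units)
    # fit_model       : trajectories -> learned MDP model (hypothetical interface)
    # evaluate_policy : model -> estimated expected return of the target policy
    # delta           : the bound is the empirical delta-quantile of the bootstrap
    #                   estimates, so it holds with confidence roughly 1 - delta
    rng = np.random.default_rng(seed)
    n = len(trajectories)
    estimates = []
    for _ in range(n_bootstrap):
        # Resample whole trajectories with replacement.
        idx = rng.integers(0, n, size=n)
        model = fit_model([trajectories[i] for i in idx])
        estimates.append(evaluate_policy(model))
    # Lower percentile of the bootstrap distribution of value estimates.
    return float(np.quantile(estimates, delta))

In a tabular setting, fit_model might compute empirical transition counts and evaluate_policy might average the returns of simulated rollouts in the fitted model; the same skeleton applies with a continuous-state model.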
Year
2016
Venue
arXiv: Artificial Intelligence
Field
Data mining, Confidence bounds, Upper and lower bounds, Bootstrapping, Computer science, Artificial intelligence, Confidence interval, Machine learning, Reinforcement learning
Volume
abs/1606.06126
Citations
0
PageRank
0.34
References
0
Authors
3
Name          Order  Citations  PageRank
Josiah Hanna  1      23         9.28
Peter Stone   2      6878       688.60
S. Niekum     3      165        23.73