Title
Importance Sampling Policy Evaluation with an Estimated Behavior Policy.
Abstract
In reinforcement learning, off-policy evaluation is the task of using data generated by one policy to determine the expected return of a second policy. Importance sampling is a standard technique for off-policy evaluation, allowing off-policy data to be used as if it were on-policy. When the policy that generated the off-policy data is unknown, the ordinary importance sampling estimator cannot be applied. In this paper, we study a family of regression importance sampling (RIS) methods that apply importance sampling by first estimating the behavior policy. We find that these estimators give strong empirical performance---surprisingly often outperforming importance sampling with the true behavior policy in both discrete and continuous domains. Our results emphasize the importance of estimating the behavior policy using only the data that will also be used for the importance sampling estimate.
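To make the idea in the abstract concrete, below is a minimal Python sketch of importance sampling with an estimated behavior policy in a tabular setting: the behavior policy is fit (by action counts) on the same trajectories that are then importance-weighted. The trajectory format, the evaluation policy pi_e, and the count-based estimate are illustrative assumptions; the paper's actual RIS estimators may differ in detail.

from collections import defaultdict

def estimate_behavior_policy(trajectories):
    # Count-based (maximum-likelihood) estimate of the behavior policy,
    # fit on the same trajectories that will be importance-weighted.
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for s, a, _ in traj:
            counts[s][a] += 1
    return {s: {a: c / sum(acts.values()) for a, c in acts.items()}
            for s, acts in counts.items()}

def ris_estimate(trajectories, pi_e, gamma=1.0):
    # Ordinary per-trajectory importance sampling, except the weights use
    # the estimated behavior policy rather than the (unknown) true one.
    pi_b_hat = estimate_behavior_policy(trajectories)
    total = 0.0
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_e[s][a] / pi_b_hat[s][a]
            ret += gamma ** t * r
        total += weight * ret
    return total / len(trajectories)

# Hypothetical usage: each trajectory is a list of (state, action, reward)
# tuples, and pi_e maps each state to a dict of action probabilities.
trajectories = [[(0, "left", 1.0), (1, "right", 0.0)],
                [(0, "right", 0.0), (1, "left", 1.0)]]
pi_e = {0: {"left": 0.6, "right": 0.4}, 1: {"left": 0.5, "right": 0.5}}
print(ris_estimate(trajectories, pi_e))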
Year: 2018
Venue: International Conference on Machine Learning
Field: Econometrics, Mathematical optimization, Importance sampling, Markov process, Markov decision process, Mean squared error, Expected return, Mathematics, Estimator
DocType:
Volume: abs/1806.01347
Citations: 2
Journal:
PageRank: 0.37
References: 7
Authors: 3
Name            Order   Citations   PageRank
Josiah Hanna    1       23          9.28
S. Niekum       2       165         23.73
Peter Stone     3       6878        688.60