Title
Off-Policy Evaluation via Off-Policy Classification.
Abstract
In this work, we consider the problem of model selection for deep reinforcement learning (RL) in real-world environments. Typically, the performance of deep RL algorithms is evaluated via on-policy interactions with the target environment. However, comparing models in a real-world environment for the purposes of early stopping or hyperparameter tuning is costly and often practically infeasible. This leads us to examine off-policy policy evaluation (OPE) in such settings. We focus on OPE for value-based methods, which are of particular interest in deep RL, with applications like robotics, where off-policy algorithms based on Q-function estimation can often attain better sample complexity than direct policy optimization. Existing OPE metrics either rely on a model of the environment, or the use of importance sampling (IS) to correct for the data being off-policy. However, for high-dimensional observations, such as images, models of the environment can be difficult to fit and value-based methods can make IS hard to use or even ill-conditioned, especially when dealing with continuous action spaces. In this paper, we focus on the specific case of MDPs with continuous action spaces and sparse binary rewards, which is representative of many important real-world applications. We propose an alternative metric that relies on neither models nor IS, by framing OPE as a positive-unlabeled (PU) classification problem with the Q-function as the decision function. We experimentally show that this metric outperforms baselines on a number of tasks. Most importantly, it can reliably predict the relative performance of different policies in a number of generalization scenarios, including the transfer to the real world of policies trained in simulation for an image-based robotic manipulation task.
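The abstract's core idea, scoring a Q-function by how well it separates (state, action) pairs from successful trajectories (positives) from the rest of the off-policy data (unlabeled), can be illustrated with a minimal sketch. The function below is an illustrative classification-style score in this spirit, not the paper's exact SoftOPC/OPC definition; `soft_opc_score` and the toy data are assumptions for demonstration.

```python
import numpy as np

def soft_opc_score(q_values, effective_mask):
    """Illustrative OPC-style score (a sketch, not the paper's exact metric).

    q_values:       Q(s, a) evaluated on off-policy (state, action) pairs.
    effective_mask: True where the pair came from a successful
                    (sparse binary reward = 1) trajectory; all other
                    pairs are treated as unlabeled, in the PU spirit.
    """
    q = np.asarray(q_values, dtype=float)
    pos = np.asarray(effective_mask, dtype=bool)
    # Higher score => Q assigns systematically higher values to the
    # "effective" pairs than to the batch overall, i.e. it separates
    # positives from unlabeled data better as a decision function.
    return q[pos].mean() - q.mean()

# Toy comparison: two Q-functions evaluated on the same off-policy batch.
labels = np.array([True, True, False, False, False, False])
q_good = np.array([0.9, 0.8, 0.2, 0.1, 0.3, 0.2])  # ranks positives high
q_bad  = np.array([0.4, 0.3, 0.5, 0.6, 0.2, 0.4])  # little separation
assert soft_opc_score(q_good, labels) > soft_opc_score(q_bad, labels)
```

The appeal of such a score for the setting described above is that it needs neither an environment model nor importance weights: it only requires off-policy data with success labels, which sparse binary rewards provide directly.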
Year
2019
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019)
Keywords
sample complexity, importance sampling, model selection, early stopping, real-world environment, models of the environment
Field
Framing (construction), Early stopping, Importance sampling, Hyperparameter, Computer science, Model selection, Artificial intelligence, Robotics, Machine learning, Binary number, Reinforcement learning
DocType
Journal
Volume
32
ISSN
1049-5258
Citations
0
PageRank
0.34
References
0
Authors
7
Name                    Order  Citations  PageRank
Alex Irpan              1      39         5.40
Kanishka Rao            2      189        11.94
Konstantinos Bousmalis  3      336        14.77
Chris Harris            4      4          1.83
Julian Ibarz            5      217        28.98
Sergey Levine           6      3377       182.21
Irpan, Alexander        7      0          0.34