Title
COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation
Abstract
We consider the offline constrained reinforcement learning (RL) problem, in which the agent aims to compute a policy that maximizes expected return while satisfying given cost constraints, learning only from a pre-collected dataset. This setting is appealing in many real-world scenarios where direct interaction with the environment is costly or risky and the resulting policy must comply with safety constraints. However, computing a policy that is guaranteed to satisfy the cost constraints is challenging in the offline setting, since off-policy evaluation inherently incurs estimation error. In this paper, we present an offline constrained RL algorithm that optimizes the policy in the space of stationary distributions. Our algorithm, COptiDICE, directly estimates the stationary distribution corrections of the optimal policy with respect to return, while constraining an upper bound on the cost, with the goal of yielding a cost-conservative policy for actual constraint satisfaction. Experimental results show that COptiDICE attains better policies in terms of both constraint satisfaction and return maximization, outperforming baseline algorithms.
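The abstract's key idea of optimizing over stationary distributions under a cost constraint can be illustrated in a toy tabular setting, where constrained RL becomes a linear program over the occupancy measure d(s, a). This is a minimal sketch of that general formulation, not the COptiDICE algorithm itself (which works from offline data via distribution corrections); the MDP, reward, cost, and budget below are all invented for the example:

```python
import numpy as np
from scipy.optimize import linprog

# Toy constrained MDP (all quantities made up for illustration).
S, A, gamma = 2, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s'] transition probabilities
r = np.array([[0.2, 1.0], [1.0, 0.2]])       # reward r(s, a)
c = np.array([[0.0, 1.0], [1.0, 0.0]])       # cost c(s, a)
p0 = np.array([1.0, 0.0])                    # initial state distribution
tau = 0.5                                    # cost budget on E_d[c]

# Bellman flow constraints on the normalized occupancy measure d(s, a):
#   sum_a d(s', a) = (1 - gamma) * p0(s') + gamma * sum_{s,a} P(s, a, s') d(s, a)
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = float(s == sp) - gamma * P[s, a, sp]
b_eq = (1 - gamma) * p0

# Maximize E_d[r] subject to E_d[c] <= tau, d >= 0, and the flow constraints.
res = linprog(
    c=-r.flatten(),                 # linprog minimizes, so negate the reward
    A_ub=c.flatten()[None, :],      # cost constraint: E_d[c] <= tau
    b_ub=[tau],
    A_eq=A_eq, b_eq=b_eq,
    bounds=(0, None),
)
d = res.x.reshape(S, A)
pi = d / d.sum(axis=1, keepdims=True)   # recover the policy: pi(a|s) ∝ d(s, a)
```

In the offline setting the occupancy measure cannot be optimized directly; DICE-style methods instead estimate the correction ratio d(s, a) / d_dataset(s, a) from logged transitions, which is the quantity COptiDICE targets.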
Year
2022
Venue
International Conference on Learning Representations (ICLR)
Keywords
Offline Reinforcement Learning, Offline Constrained Reinforcement Learning, Stationary Distribution Correction Estimation
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
7
Name                  Order  Citations  PageRank
Jongmin Lee           1      0          1.35
Cosmin Păduraru       2      2          1.72
Daniel J. Mankowitz   3      29         8.05
Nicolas Heess         4      1762       94.77
Doina Precup          5      2829       221.83
Kee-Eung Kim          6      375        45.28
Arthur Guez           7      2481       100.43