Title
Model-Based Offline Policy Optimization with Distribution Correcting Regularization
Abstract
Offline Reinforcement Learning (RL) aims to learn effective policies from previously collected datasets without further exploration in the environment. Model-based algorithms, which first learn a dynamics model from the offline dataset and then conservatively learn a policy under that model, have shown great potential in offline RL. Previous model-based algorithms typically penalize the rewards with the uncertainty of the dynamics model; however, the estimated uncertainty is not necessarily consistent with the model error. Inspired by a lower bound on the return under the real dynamics, in this paper we present a model-based alternative called DROP for offline RL. In particular, DROP estimates the density ratio between the model-rollout distribution and the offline data distribution via the DICE framework [45], and then regularizes the model-predicted rewards with this ratio for pessimistic policy learning. Extensive experiments show that DROP achieves comparable or better performance than baselines on widely studied offline RL benchmarks.
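As a rough sketch of the idea described in the abstract (not the authors' implementation), the snippet below penalizes model-predicted rewards with an estimated density ratio between model-rollout samples and offline-data samples. A simple classifier-based ratio estimator is used here as a stand-in for the DICE estimator referenced in the paper; the function names, the penalty form, and the hyperparameters (estimate_density_ratio, regularize_rewards, lam) are illustrative assumptions.

```python
import numpy as np

def estimate_density_ratio(rollout_sa, data_sa, lr=0.1, steps=500):
    """Classifier-based stand-in for a DICE-style ratio estimator:
    train logistic regression to distinguish model-rollout (s, a) pairs
    from offline-data pairs; the odds p / (1 - p) then approximate
    d_rollout(s, a) / d_data(s, a)."""
    X = np.vstack([rollout_sa, data_sa])
    y = np.concatenate([np.ones(len(rollout_sa)), np.zeros(len(data_sa))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # P(sample came from rollouts)
        w -= lr * X.T @ (p - y) / len(y)         # logistic-loss gradient step
        b -= lr * np.mean(p - y)
    p_roll = 1.0 / (1.0 + np.exp(-(rollout_sa @ w + b)))
    return p_roll / np.clip(1.0 - p_roll, 1e-6, None)

def regularize_rewards(r_model, ratio, lam=1.0):
    """Pessimistic reward: subtract a penalty that grows on transitions the
    model rollouts visit far more often than the offline data (ratio >> 1)."""
    penalty = np.maximum(np.log(np.clip(ratio, 1e-6, None)), 0.0)
    return r_model - lam * penalty

# Toy usage on synthetic (state, action) features.
rng = np.random.default_rng(0)
rollout_sa = rng.normal(0.5, 1.0, size=(256, 4))  # model rollouts, slightly shifted
data_sa = rng.normal(0.0, 1.0, size=(256, 4))     # offline dataset samples
ratio = estimate_density_ratio(rollout_sa, data_sa)
r_pess = regularize_rewards(rng.normal(size=256), ratio, lam=0.5)
```

In the paper the ratio is estimated over occupancy measures via the DICE framework rather than with a per-sample classifier, and the exact penalty may differ; the sketch only illustrates how a distribution-correcting ratio can regularize model-predicted rewards.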
Year
2021
DOI
10.1007/978-3-030-86486-6_11
Venue
MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES
Keywords
Offline Reinforcement Learning, Model-based, Reinforcement Learning, Occupancy measure
DocType
Conference
Volume
12975
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
6
Name
Order
Citations
PageRank
Jian Shen1225.46
Mingcheng Chen212.04
Zhicheng Zhang311.09
Zhengyu Yang4678.51
Weinan Zhang5122897.24
Yong Yu67637380.66