Title
Critic Regularized Regression
Abstract
Offline reinforcement learning (RL), also known as batch RL, offers the prospect of policy optimization from large pre-recorded datasets without online environment interaction. It addresses challenges with regard to the cost of data collection and safety, both of which are particularly pertinent to real-world applications of RL. Unfortunately, most off-policy algorithms perform poorly when learning from a fixed dataset. In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR). We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces -- outperforming several state-of-the-art offline RL algorithms by a significant margin on a wide range of benchmark tasks.
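The abstract describes CRR only at a high level. As an illustration, the following is a minimal PyTorch-style sketch of a critic-weighted regression loss in the spirit of the method: the policy is fit to dataset actions by behavioural cloning, reweighted by an exponentiated advantage from the critic. The interfaces assumed here (policy(states) returning a torch distribution, critic(states, actions) returning Q-value estimates) and the names crr_policy_loss, beta, num_action_samples, and max_weight are illustrative assumptions, not the paper's reference implementation.

import torch

def crr_policy_loss(policy, critic, states, actions,
                    beta=1.0, num_action_samples=4, max_weight=20.0):
    # policy(states) -> torch.distributions.Distribution over actions (assumed interface)
    # critic(states, actions) -> Q(s, a) estimates of shape (batch,)   (assumed interface)
    dist = policy(states)
    log_prob = dist.log_prob(actions)                 # log pi(a|s) for dataset actions

    with torch.no_grad():
        q_data = critic(states, actions)              # Q(s, a) for dataset actions
        # Monte-Carlo estimate of V(s) using actions sampled from the current policy
        sampled = dist.sample((num_action_samples,))  # (num_samples, batch, action_dim)
        q_pi = torch.stack([critic(states, a) for a in sampled]).mean(dim=0)
        advantage = q_data - q_pi
        # Exponential-advantage weight, clipped for numerical stability
        weight = torch.clamp(torch.exp(advantage / beta), max=max_weight)

    # Weighted behavioural cloning: imitate dataset actions in proportion to
    # how much better the critic judges them than the policy's own samples.
    return -(weight * log_prob).mean()

Because the weight is computed only on actions that appear in the dataset, the regression stays within the support of the data while still favouring actions the critic considers advantageous.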
Year
2020
Venue
NIPS 2020
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
11
Name                        Order   Citations   PageRank
Ziyu Wang                   1       372         23.71
Alexander Novikov           2       98          7.62
Żołna Konrad                3       7           4.51
Josh S. Merel               4       143         11.34
Jost Tobias Springenberg    5       1126        62.86
S. Reed                     6       1750        80.25
Bobak Shahriari             7       283         12.43
Noah Siegel                 8       5           2.48
Çaglar Gülçehre             9       3010        133.22
Nicolas Heess               10      1762        94.77
Nando De Freitas            11      3284        273.68