Title
Addressing Sample Inefficiency and Reward Bias in Inverse Reinforcement Learning.
Abstract
The Generative Adversarial Imitation Learning (GAIL) framework from Ho & Ermon (2016) is known for being surprisingly sample efficient in terms of the number of demonstrations provided by an expert policy. However, the algorithm requires a significantly larger number of policy interactions with the environment in order to imitate the expert. In this work we address this problem by proposing a sample-efficient algorithm for inverse reinforcement learning that incorporates both off-policy reinforcement learning and adversarial imitation learning. We also show that GAIL has a number of biases associated with the choice of reward function, which can unintentionally encode prior knowledge of some tasks and prevent learning in others. We address these shortcomings by analyzing the issue and correcting invalid assumptions used when defining the learned reward function. We demonstrate that our algorithm achieves state-of-the-art performance for an inverse reinforcement learning framework on a variety of standard benchmark tasks, using demonstrations provided by both learned agents and human experts.
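The abstract describes the method only in prose; the snippet below is a minimal, hypothetical sketch of the mechanism it refers to: a discriminator trained to separate expert transitions from policy transitions, whose output defines the learned reward that an off-policy learner (reusing replay-buffer data) would then optimize. The network sizes, the reward form log D - log(1 - D), and all identifiers (Discriminator, discriminator_step) are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn


class Discriminator(nn.Module):
    """Classifies (state, action) pairs as expert (label 1) or policy (label 0)."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),  # raw logit
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

    def reward(self, obs, act):
        # log D - log(1 - D) equals the raw logit when D = sigmoid(logit).
        # Unlike a strictly positive or strictly negative reward, this form
        # can take either sign, so it does not implicitly reward survival
        # or encourage early termination.
        with torch.no_grad():
            return self.forward(obs, act)


def discriminator_step(disc, optimizer, expert_batch, policy_batch):
    """One adversarial update: expert transitions labeled 1, policy ones 0.

    Each batch is an (obs, act) pair of tensors; in this sketch the policy
    transitions are drawn from the same replay buffer used by the off-policy
    RL learner, which is what reduces the number of environment interactions.
    """
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(*expert_batch)
    policy_logits = disc(*policy_batch)
    loss = (bce(expert_logits, torch.ones_like(expert_logits))
            + bce(policy_logits, torch.zeros_like(policy_logits)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()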
Year
2018
Venue
arXiv: Learning
DocType
Journal
Volume
abs/1809.02925
Citations
2
PageRank
0.37
References
25
Authors
4
Name                   Order  Citations  PageRank
Ilya Kostrikov         1      6          3.14
Kumar Krishna Agrawal  2      7          3.49
Sergey Levine          3      3377       182.21
Jonathan Tompson       4      739        32.92