Title
Imitation Learning via Off-Policy Distribution Matching
Abstract
When performing imitation learning from expert demonstrations, distribution matching is a popular approach, in which one alternates between estimating distribution ratios and then using these ratios as rewards in a standard reinforcement learning (RL) algorithm. Traditionally, estimation of the distribution ratio requires on-policy data, which has caused previous work to either be exorbitantly data-inefficient or alter the original objective in a manner that can drastically change its optimum. In this work, we show how the original distribution ratio estimation objective may be transformed in a principled manner to yield a completely off-policy objective. In addition to the data-efficiency that this provides, we are able to show that this objective also renders the use of a separate RL optimization unnecessary. Rather, an imitation policy may be learned directly from this objective without the use of explicit rewards. We call the resulting algorithm ValueDICE and evaluate it on a suite of popular imitation learning benchmarks, finding that it can achieve state-of-the-art sample efficiency and performance.
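Objective sketch
The abstract describes rewriting the on-policy distribution-ratio objective into a fully off-policy one. Below is a hedged LaTeX sketch of what such an objective looks like, under assumed notation (the symbols $d^{\pi}$, $d^{\mathrm{exp}}$, $\nu$, $\mathcal{B}^{\pi}$, $p_0$, $\gamma$, and the use of the Donsker-Varadhan representation are assumptions of this sketch, not taken from the record above):

% Distribution matching: max_pi  -KL(d^pi || d^exp), rewritten via the
% Donsker-Varadhan representation and the change of variables
% x(s,a) = nu(s,a) - B^pi nu(s,a), so that every expectation is taken over
% expert data or the initial state distribution (i.e., fully off-policy).
\max_{\pi}\ \min_{\nu}\quad
  \log \mathbb{E}_{(s,a)\sim d^{\mathrm{exp}}}\!\big[e^{\,\nu(s,a)-\mathcal{B}^{\pi}\nu(s,a)}\big]
  \;-\;(1-\gamma)\,\mathbb{E}_{s_0\sim p_0,\ a_0\sim\pi(\cdot\mid s_0)}\!\big[\nu(s_0,a_0)\big],
\qquad
\mathcal{B}^{\pi}\nu(s,a) := \gamma\,\mathbb{E}_{s'\sim p(\cdot\mid s,a),\ a'\sim\pi(\cdot\mid s')}\!\big[\nu(s',a')\big].

Under this sketch, both expectations involve only expert samples and initial states, so no on-policy rollouts or explicit rewards are required, which is why a policy can be trained directly from this single objective.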
Year
2020
Venue
ICLR
Keywords
reinforcement learning, deep learning, imitation learning, adversarial learning
DocType
Conference
Citations
0
PageRank
0.34
References
12
Authors
3
Name              Order  Citations  PageRank
Ilya Kostrikov    1      6          3.14
Ofir Nachum       2      94         12.01
Jonathan Tompson  3      7393       2.92