Title
Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications
Abstract
Inverse reinforcement learning (IRL) infers a reward function from demonstrations, allowing for policy improvement and generalization. However, despite much recent interest in IRL, little work has been done to understand the minimum set of demonstrations needed to teach a specific sequential decision-making task. We formalize the problem of finding maximally informative demonstrations for IRL as a machine teaching problem, where the goal is to find the minimum number of demonstrations needed to specify the reward equivalence class of the demonstrator. We extend previous work on algorithmic teaching for sequential decision-making tasks by showing a reduction to the set cover problem, which enables an efficient approximation algorithm for determining the set of maximally informative demonstrations. We apply our machine teaching algorithm to two novel applications: providing a lower bound on the number of queries needed to learn a policy using active IRL, and developing an IRL algorithm that can learn more efficiently from informative demonstrations than a standard IRL approach.
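The central algorithmic idea in the abstract, reducing demonstration selection to set cover and solving it greedily, can be illustrated with a minimal sketch. This is not the paper's own implementation; the function name greedy_demo_selection, the constraint labels, and the assumption that the half-space constraints defining the demonstrator's reward equivalence class have already been enumerated (with each candidate demonstration mapped to the subset of constraints it covers) are all illustrative.

from typing import Dict, List, Set

def greedy_demo_selection(constraints: Set[str],
                          demo_coverage: Dict[str, Set[str]]) -> List[str]:
    """Greedy set-cover heuristic: repeatedly pick the demonstration that
    covers the most still-uncovered constraints until every constraint on
    the reward weights is covered (or no demonstration helps)."""
    uncovered = set(constraints)
    selected: List[str] = []
    while uncovered:
        # Demonstration whose constraint set overlaps most with what remains.
        best = max(demo_coverage, key=lambda d: len(demo_coverage[d] & uncovered))
        gained = demo_coverage[best] & uncovered
        if not gained:
            break  # remaining constraints are not coverable by any candidate
        selected.append(best)
        uncovered -= gained
    return selected

# Hypothetical toy example: three candidate demonstrations and four
# half-space constraints (c1..c4) on the reward weights.
demos = {
    "demo_A": {"c1", "c2"},
    "demo_B": {"c2", "c3"},
    "demo_C": {"c3", "c4"},
}
print(greedy_demo_selection({"c1", "c2", "c3", "c4"}, demos))  # ['demo_A', 'demo_C']

The greedy choice matters because, for set cover, it carries a well-known logarithmic approximation guarantee, which is what lets the reduction yield an efficient approximation algorithm for choosing a small, maximally informative demonstration set.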
Year: 2018
Venue: National Conference on Artificial Intelligence
Field: Set cover problem, Active learning, Algorithm, Inverse reinforcement learning, Counterfactual thinking, Equivalence (measure theory), Artificial intelligence, Equivalence class, Machine learning, Mathematics, Benchmarking
DocType: Journal
Volume: abs/1805.07687
Citations: 1
PageRank: 0.36
References: 31
Authors: 2
Name              Order   Citations   PageRank
Daniel S. Brown   1       34          8.67
S. Niekum         2       165         23.73