Title
Inverse Reinforcement Learning from Failure
Abstract
Inverse reinforcement learning (IRL) allows autonomous agents to learn to solve complex tasks from successful demonstrations. However, in many settings, e.g., when a human learns the task by trial and error, failed demonstrations are also readily available. In addition, in some tasks, purposely generating failed demonstrations may be easier than generating successful ones. Since existing IRL methods cannot make use of failed demonstrations, in this paper we propose inverse reinforcement learning from failure (IRLF), which exploits both successful and failed demonstrations. Starting from the state-of-the-art maximum causal entropy IRL method, we propose a new constrained optimisation formulation that accommodates both types of demonstrations while remaining convex. We then derive update rules for learning reward functions and policies. Experiments on both simulated and real-robot data demonstrate that IRLF converges faster and generalises better than maximum causal entropy IRL, especially when few successful demonstrations are available.
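To make the abstract's description concrete, below is a minimal sketch (not the authors' exact formulation, which derives dual-variable updates from a constrained maximum-causal-entropy programme) of a gradient step that pulls a soft-optimal policy's feature expectations toward those of successful demonstrations while pushing them away from those of failed ones. The tabular MDP with known transitions P, the feature matrix PHI, and the hyperparameters lr and w_fail (standing in for the paper's dual variables on the failure constraints) are all illustrative assumptions.

```python
import numpy as np

def soft_value_iteration(P, reward, horizon):
    """Maximum-causal-entropy (softmax) policy via soft backups.

    P:      transition tensor P[s, a, s'] (known, tabular -- an assumption)
    reward: per-state reward vector reward[s]
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    policy = np.empty((horizon, S, A))
    for t in reversed(range(horizon)):
        Q = reward[:, None] + P @ V          # soft Q-values, shape (S, A)
        V = np.log(np.exp(Q).sum(axis=1))    # "soft max" over actions
        policy[t] = np.exp(Q - V[:, None])   # causal-entropy policy pi_t(a|s)
    return policy

def expected_features(P, policy, PHI, s0, horizon):
    """Feature expectations of the policy, starting from state s0."""
    d = np.zeros(P.shape[0])
    d[s0] = 1.0                              # state-occupancy distribution
    mu = np.zeros(PHI.shape[1])
    for t in range(horizon):
        mu += d @ PHI
        d = np.einsum('s,sa,sau->u', d, policy[t], P)
    return mu

def irlf_step(theta, P, PHI, mu_success, mu_fail, s0, horizon,
              lr=0.1, w_fail=0.5):
    """One sketched IRLF-style update on the reward weights theta.

    mu_success / mu_fail are empirical feature expectations of the
    successful / failed demonstrations; lr and w_fail are hypothetical
    hyperparameters (the paper instead optimises dual variables).
    """
    policy = soft_value_iteration(P, PHI @ theta, horizon)
    mu_pi = expected_features(P, policy, PHI, s0, horizon)
    # Pull the policy's features toward the successful demonstrations
    # and push them away from the failed ones.
    grad = (mu_success - mu_pi) + w_fail * (mu_pi - mu_fail)
    return theta + lr * grad
```

Iterating irlf_step until the gradient vanishes yields reward weights whose soft-optimal policy matches the successful demonstrations while avoiding the failed ones; how strongly failures repel the policy depends on w_fail, which in the paper is an optimised dual variable rather than a fixed weight.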
Year
2016
DOI
10.5555/2936924.2937079
Venue
AAMAS
Keywords
Inverse reinforcement learning, learning from demonstration, social navigation, robotics, machine learning
Field
Autonomous agent, Trial and error, Computer science, Exploit, Inverse reinforcement learning, Learning from demonstration, Artificial intelligence, Machine learning, Robotics, Social navigation
DocType
Conference
Citations
9
PageRank
0.56
References
14
Authors
3
Name | Order | Citations | PageRank
Kyriacos Shiarlis | 1 | 26 | 3.90
João V. Messias | 2 | 26 | 4.77
Shimon Whiteson | 3 | 1460 | 99.00