Title
Rapidly Exploring Learning Trees
Abstract
Inverse Reinforcement Learning (IRL) for path planning enables robots to learn cost functions for difficult tasks from demonstration, instead of hard-coding them. However, IRL methods face practical limitations that stem from the need to repeat costly planning procedures. In this paper, we propose Rapidly Exploring Learning Trees (RLT∗), which learns the cost functions of Optimal Rapidly Exploring Random Trees (RRT∗) from demonstration, thereby making inverse learning methods applicable to more complex tasks. Our approach extends Maximum Margin Planning to work with RRT∗ cost functions. Furthermore, we propose a caching scheme that greatly reduces the computational cost of this approach. Experimental results on simulated and real-robot data from a social navigation scenario show that RLT∗ achieves better performance at lower computational cost than existing methods. We also successfully deploy control policies learned with RLT∗ on a real telepresence robot.
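The sketch below illustrates, under simplifying assumptions, the Maximum Margin Planning-style update that the abstract describes, with a planner standing in for RRT∗. The names mmp_step, learn_costs, plan_fn, and feat_fn, the linear feature cost, and all hyperparameters are illustrative placeholders rather than the paper's implementation; the loss augmentation and the caching scheme are omitted.
```python
import numpy as np


def mmp_step(w, demo_feats, planned_feats, lr=0.05, reg=1e-3):
    """One subgradient step of a Maximum Margin Planning-style objective:
    lower the weighted cost of demonstrated paths relative to the paths
    the planner currently prefers, with L2 regularisation on the weights."""
    grad = demo_feats - planned_feats + reg * w
    w = w - lr * grad
    # Keep weights non-negative so the resulting cost map stays valid
    # for an optimal sampling-based planner such as RRT*.
    return np.maximum(w, 0.0)


def learn_costs(demos, plan_fn, feat_fn, n_features, iters=50):
    """demos: iterable of (start, goal, demo_path) tuples.
    plan_fn(start, goal, w) stands in for RRT* and should return a path that
    (approximately) minimises the w-weighted feature cost.
    feat_fn(path) returns the path's accumulated feature vector."""
    w = np.ones(n_features)
    for _ in range(iters):
        for start, goal, demo_path in demos:
            planned_path = plan_fn(start, goal, w)  # re-plan under current costs
            w = mmp_step(w, feat_fn(demo_path), feat_fn(planned_path))
    return w
```
In a loop of this form, re-planning at every update is the dominant cost, which is the expense the paper's caching scheme is intended to reduce.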
Year
2017
DOI
10.1109/ICRA.2017.7989184
Venue
ICRA
Field
Motion planning, Computer science, Inverse reinforcement learning, Artificial intelligence, Robot, Telerobotics, Machine learning, Trajectory, Mobile robot, Social navigation
DocType
Conference
Volume
2017
Issue
1
Citations
5
PageRank
0.46
References
11
Authors
3
Name, Order, Citations, PageRank
Kyriacos Shiarlis, 1, 26, 3.90
João V. Messias, 2, 26, 4.77
Shimon Whiteson, 3, 1460, 99.00