Title
Learning Goal Conditioned Socially Compliant Navigation From Demonstration Using Risk-Based Features
Abstract
One of the main challenges of operating mobile robots in social environments is navigating safely and fluidly, specifically sharing a space with other human inhabitants by complying with the explicit and implicit rules that we humans follow during navigation. While these rules come naturally to us, they resist simple and explicit definition. In this letter, we present a learning-based solution to the problem of socially compliant navigation, that is, navigating while adhering to the navigational policies a person might use. We infer these policies by learning from human examples using inverse reinforcement learning techniques. In particular, this letter contributes an efficient sampling-based approximation that enables model-free deep inverse reinforcement learning, and a goal conditioned risk-based feature representation that adequately captures local information surrounding the agent. We validate our approach by comparing against a classical algorithm and a reinforcement learning agent, and we evaluate our feature representation against similar representations from the literature. We find that the combination of our proposed method and our feature representation produces higher quality trajectories, and that our proposed feature representation plays a critical role in successful navigation.
Year
2021
DOI
10.1109/LRA.2020.3048657
Venue
IEEE Robotics and Automation Letters
Keywords
Inverse reinforcement learning, learning from demonstration, motion and path planning, robot navigation, social navigation
DocType
Journal
Volume
6
Issue
2
ISSN
2377-3766
Citations
0
PageRank
0.34
References
0
Authors
3
Name            Order  Citations  PageRank
Abhisek Konar   1      0          0.34
Bobak H. Baghi  2      0          0.34
Gregory Dudek   3      7          6.16