Title
Zero-Shot Reinforcement Learning on Graphs for Autonomous Exploration Under Uncertainty
Abstract
This paper studies the problem of autonomous exploration under localization uncertainty for a mobile robot with 3D range sensing. We present a framework for self-learning a high-performance exploration policy in a single simulation environment and transferring it to other environments, whether physical or virtual. Recent work in transfer learning achieves encouraging performance through domain adaptation and domain randomization, which expose an agent to a range of scenarios that fill the inherent gaps of sim2sim and sim2real approaches. However, it is inefficient to train an agent across randomized conditions merely so it can learn the important features of its current state; an agent can learn far more efficiently from domain knowledge provided by human experts. We propose a novel approach that uses graph neural networks in conjunction with deep reinforcement learning, enabling decision-making over graphs that encode exploration-relevant information provided by human experts, in order to predict a robot's optimal sensing action in belief space. The resulting policy, trained in only a single simulation environment, offers a real-time, scalable, and transferable decision-making strategy, achieving zero-shot transfer to other simulation environments and even to real-world environments.
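To make the abstract's core idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: one round of mean-aggregation message passing over a small exploration graph, followed by a linear readout that scores each candidate sensing action. The node features (expected information gain, travel cost), the readout weights, and the toy graph are all hypothetical assumptions chosen for illustration; a trained GNN policy would learn such parameters from reinforcement in simulation.

```python
# Illustrative sketch (hypothetical features/weights, not the paper's code):
# a GNN-style policy ranks nodes of an exploration graph by one round of
# neighbor aggregation plus a linear per-node score.

def message_pass(features, edges):
    """One round of mean-aggregation message passing.

    features: {node: [f1, f2, ...]} per-node feature vectors
    edges:    list of undirected (u, v) pairs
    Returns {node: self_features + mean_neighbor_features}.
    """
    neighbors = {n: [] for n in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    updated = {}
    for n, feat in features.items():
        if neighbors[n]:
            agg = [sum(features[m][i] for m in neighbors[n]) / len(neighbors[n])
                   for i in range(len(feat))]
        else:
            agg = [0.0] * len(feat)
        # concatenate self features with aggregated neighbor features
        updated[n] = feat + agg
    return updated

def score(updated, weights):
    """Linear readout: higher score = preferred next sensing action."""
    return {n: sum(w * x for w, x in zip(weights, h))
            for n, h in updated.items()}

# Toy exploration graph: per-node features = [expected info gain, travel cost]
feats = {"a": [0.9, 0.2], "b": [0.4, 0.1], "c": [0.1, 0.8]}
edges = [("a", "b"), ("b", "c")]

h = message_pass(feats, edges)
scores = score(h, [1.0, -0.5, 0.3, -0.1])  # hypothetical learned weights
best = max(scores, key=scores.get)         # node chosen as next sensing action
```

The design point this sketch mirrors is that the policy operates on the graph structure itself, so the same learned weights apply to graphs of any size or topology, which is what enables the zero-shot transfer the abstract claims.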
Year: 2021
DOI: 10.1109/ICRA48506.2021.9561917
Venue: 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021)
DocType: Conference
Volume: 2021
Issue: 1
ISSN: 1050-4729
Citations: 0
PageRank: 0.34
References: 6
Authors: 7
Name             Order  Citations  PageRank
Fanfei Chen      1      0          0.68
Szenher Paul     2      1          0.74
Yewei Huang      3      0          0.68
Jinkun Wang      4      7          5.91
Tixiao Shan      5      13         4.33
Shi Bai          6      75         7.17
Brendan Englot   7      221        21.53