Abstract
---
We consider an autonomous exploration problem in which a range-sensing mobile robot must map the landmarks of an a priori unknown environment accurately, efficiently, and in real time; it must choose sensing actions that both curb localization uncertainty and maximize information gain. For this problem, belief-space planning methods that forward-simulate robot sensing and estimation often fail to run in real time, scaling poorly with the size of the state, belief, and action spaces. We propose a novel approach that uses graph neural networks (GNNs) in conjunction with deep reinforcement learning (DRL), enabling decision-making over graphs containing exploration information in order to predict a robot's optimal sensing action in belief space. The policy, trained in varied random environments without human intervention, offers a real-time, scalable decision-making process whose high-performance exploratory sensing actions yield accurate maps and high rates of information gain.
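The core idea — scoring candidate sensing actions by passing belief-space features through a GNN over the exploration graph, then acting greedily on the learned scores — can be sketched minimally as below. This is an illustrative assumption of the general technique, not the paper's actual architecture: the feature set, the single mean-aggregation message-passing layer, and the helper names (`gnn_action_scores`, `greedy_action`) are all hypothetical.

```python
import numpy as np

def gnn_action_scores(node_feats, adj, w_self, w_neigh, w_out):
    """One round of mean-aggregation message passing, then a scalar
    score per node (e.g., an estimate of an action's exploration value).

    node_feats: (N, F) belief-space features per graph node
    adj:        (N, N) binary adjacency matrix of the exploration graph
    w_self, w_neigh: (F, H) weight matrices; w_out: (H,) readout vector
    """
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # avoid division by zero
    neigh_mean = adj @ node_feats / deg      # aggregate neighbor features
    h = np.tanh(node_feats @ w_self + neigh_mean @ w_neigh)
    return h @ w_out                         # one score per node

def greedy_action(node_feats, adj, params, candidate_mask):
    """Pick the candidate node (e.g., a frontier) with the highest score,
    mimicking a greedy policy over DRL-trained value estimates."""
    scores = gnn_action_scores(node_feats, adj, *params)
    scores = np.where(candidate_mask, scores, -np.inf)
    return int(np.argmax(scores))
```

In an actual system the weights would be trained by a DRL algorithm against an exploration reward (map accuracy, information gain), and the graph would be rebuilt as the robot's belief evolves; here the weights are just free parameters.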
Year | DOI | Venue |
---|---|---|
2020 | 10.1109/IROS45743.2020.9341657 | IROS |
DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34
References | Authors
---|---
0 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Fanfei Chen | 1 | 0 | 0.68 |
John Martin | 2 | 2 | 5.18 |
Yewei Huang | 3 | 2 | 1.41 |
Jinkun Wang | 4 | 7 | 5.91 |
Brendan Englot | 5 | 221 | 21.53 |