Title
Learning to Navigate: Exploiting Deep Networks to Inform Sample-Based Planning During Vision-Based Navigation.
Abstract
Recent applications of deep learning to navigation have produced end-to-end navigation solutions in which visual sensor input is mapped to control signals or to motion primitives. The resulting visual navigation strategies perform well at collision avoidance and match the performance of traditional reactive navigation algorithms while operating in real time. It is accepted that these solutions cannot provide the same level of performance as a global planner. However, it is less clear how such end-to-end systems should be integrated into a full navigation pipeline. We evaluate the typical end-to-end solution within a full navigation pipeline in order to expose its weaknesses. Doing so illuminates how to better integrate deep learning methods into the navigation pipeline. In particular, we show that they are an efficient means of providing informed samples for sample-based planners. Controlled simulations with comparisons against traditional planners show that the number of samples can be reduced by an order of magnitude while preserving navigation performance. An implementation on a mobile robot matches the simulated performance outcomes.
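The core idea described in the abstract, using a deep network's output to bias the sampling distribution of a sample-based planner, can be illustrated with a short sketch. The snippet below is a hypothetical illustration, not the authors' implementation: `predict_sample_heatmap` stands in for a learned model (here a uniform stub so the code runs without trained weights), and `informed_sampler` mixes heatmap-driven samples with uniform ones. All function names and parameters are assumptions introduced for illustration.

```python
# Minimal sketch of "informed sampling" for a sample-based planner:
# a learned model scores workspace cells, and the planner draws most
# of its samples from high-scoring cells instead of uniformly.
import numpy as np

def predict_sample_heatmap(image, grid_shape=(32, 32)):
    """Placeholder for a deep network mapping a camera image to a
    probability map over workspace cells likely to yield useful samples.
    Returns a uniform map here so the sketch runs without a model."""
    heat = np.ones(grid_shape)
    return heat / heat.sum()

def informed_sampler(heatmap, bounds, bias=0.9, rng=None):
    """Draw one 2D sample: with probability `bias`, pick a cell in
    proportion to the heatmap and jitter within it; otherwise fall
    back to uniform sampling over the workspace bounds."""
    rng = rng or np.random.default_rng()
    (xmin, xmax), (ymin, ymax) = bounds
    if rng.random() < bias:
        # Sample a flat cell index weighted by the heatmap.
        flat = rng.choice(heatmap.size, p=heatmap.ravel())
        r, c = np.unravel_index(flat, heatmap.shape)
        cell_w = (xmax - xmin) / heatmap.shape[1]
        cell_h = (ymax - ymin) / heatmap.shape[0]
        x = xmin + (c + rng.random()) * cell_w
        y = ymin + (r + rng.random()) * cell_h
    else:
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(ymin, ymax)
    return np.array([x, y])

# Usage: feed the biased samples to any sample-based planner (e.g., RRT).
heat = predict_sample_heatmap(image=None)
samples = [informed_sampler(heat, bounds=((0.0, 10.0), (0.0, 10.0)))
           for _ in range(100)]
```

In this sketch the `bias` parameter trades sample efficiency against exploration; retaining a uniform fallback is a common way to preserve the coverage properties of samplers such as RRT while still concentrating effort where the network predicts useful samples.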
Year
2018
Venue
arXiv: Robotics
Field
Simulation, Planner, Visual navigation, Collision, Vision based, Human–computer interaction, Artificial intelligence, Engineering, Deep learning, Mobile robot
DocType
Volume
abs/1801.05132
Citations
0
Journal
PageRank
0.34
References
4
Authors
4
Name             | Order | Citations | PageRank
Justin Smith     | 1     | 97        | 11.74
Jin-Ha Hwang     | 2     | 0         | 0.34
Fu-Jen Chu       | 3     | 38        | 5.39
Patricio A Vela  | 4     | 369       | 39.12