Title
Efficient descriptor learning for large scale localization.
Abstract
Many robotics and Augmented Reality (AR) systems that use sparse keypoint-based visual maps operate in large and highly repetitive environments, where pose tracking and localization are challenging tasks. Additionally, these systems usually face further challenges, such as limited computational power or insufficient memory for storing large maps of the entire environment. Thus, developing compact map representations and improving retrieval are of considerable interest for enabling large-scale visual place recognition and loop closure. In this paper, we propose a novel approach to compress descriptors while increasing their discriminability and matchability, based on recent advances in neural networks. At the same time, we target resource-constrained robotics applications in our design choices. The main contributions of this work are twofold. First, we propose a linear projection from descriptor space to a lower-dimensional Euclidean space, based on a novel supervised learning strategy employing a triplet loss. Second, we show the importance of adding contextual appearance information to the visual feature in order to improve matching under strong viewpoint, illumination, and scene changes. Through detailed experiments on three challenging datasets, we demonstrate significant gains in performance over state-of-the-art methods.
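The sketch below illustrates the core idea described in the abstract: a learned linear projection that maps a high-dimensional keypoint descriptor to a compact Euclidean embedding, trained with a triplet loss. It is only a minimal illustration; the framework (PyTorch), the dimensions, the margin, and the random descriptors standing in for real matched keypoints are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative sketch: compress descriptors with a single linear projection
# trained on (anchor, positive, negative) descriptor triplets.
# IN_DIM/OUT_DIM, margin, and learning rate are assumed values.
IN_DIM, OUT_DIM = 128, 16
projection = nn.Linear(IN_DIM, OUT_DIM, bias=False)   # linear map to lower-dimensional Euclidean space
criterion = nn.TripletMarginLoss(margin=0.5)           # pull matching descriptors together, push non-matches apart
optimizer = torch.optim.SGD(projection.parameters(), lr=1e-2)

def train_step(anchor, positive, negative):
    """One update on a batch of descriptor triplets, each of shape [B, IN_DIM]."""
    optimizer.zero_grad()
    loss = criterion(projection(anchor), projection(positive), projection(negative))
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random descriptors in place of real keypoint matches.
a, p, n = (torch.randn(32, IN_DIM) for _ in range(3))
print(train_step(a, p, n))
```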
Year
2017
DOI
10.1109/ICRA.2017.7989359
Venue
ICRA
Field
Computer vision, Computer science, Visualization, Supervised learning, Augmented reality, Euclidean space, Solid modeling, Artificial intelligence, Artificial neural network, Robot, Robotics
DocType
Conference
Volume
2017
Issue
1
Citations
5
PageRank
0.41
References
23
Authors
6
Name                 Order  Citations  PageRank
Antonio Loquercio    1      5          0.75
Marcin Dymczyk       2      5          0.41
Bernhard Zeisl       3      114        6.75
Simon Lynen          4      617        26.48
Igor Gilitschenski   5      31         4.05
Roland Siegwart      6      7640       551.49