Title
SPP-Net: Deep Absolute Pose Regression with Synthetic Views.
Abstract
Image-based localization is an important problem in computer vision due to its wide applicability in robotics, augmented reality, and autonomous systems. The literature describes a rich set of methods for geometrically registering a 2D image with respect to a 3D model. Recently, methods based on deep (convolutional) feedforward networks (CNNs) have become popular for pose regression. However, these CNN-based methods are still less accurate than geometry-based methods, despite being fast and memory efficient. In this work we design a deep neural network architecture based on sparse feature descriptors to estimate the absolute pose of an image. Our choice of sparse feature descriptors has two major advantages: first, our network is significantly smaller than the CNNs proposed in the literature for this task, which makes our approach more efficient and scalable. Second, and more importantly, using sparse features allows us to augment the training data with synthetic viewpoints, which leads to substantial improvements in generalization to unseen poses. Our proposed method thus aims to combine the best of two worlds, feature-based localization and CNN-based pose regression, to achieve state-of-the-art performance in absolute pose estimation. A detailed analysis of the proposed architecture and a rigorous evaluation on existing datasets are provided to support our method.
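To make the idea concrete, below is a minimal sketch of regressing an absolute 6-DoF camera pose (a translation plus a unit quaternion) from a set of sparse keypoints and their descriptors. It is written in PyTorch; the layer sizes, the PointNet-style max pooling, and the quaternion parameterization are illustrative assumptions, not the published SPP-Net architecture.

# Minimal sketch (assumptions, not the authors' exact network): absolute pose
# regression from sparse features, here SIFT-like 128-D descriptors with their
# 2D keypoint locations.
import torch
import torch.nn as nn


class SparsePoseRegressor(nn.Module):
    def __init__(self, desc_dim=128, hidden=256):
        super().__init__()
        # Per-keypoint encoder: shared MLP applied to (x, y, descriptor).
        self.point_mlp = nn.Sequential(
            nn.Linear(2 + desc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Global head: pool over keypoints, then regress the pose.
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 7),  # 3 values for translation, 4 for quaternion
        )

    def forward(self, kpts, descs):
        # kpts: (B, N, 2) keypoint coordinates, descs: (B, N, desc_dim)
        x = self.point_mlp(torch.cat([kpts, descs], dim=-1))  # (B, N, hidden)
        x = x.max(dim=1).values          # permutation-invariant pooling over keypoints
        out = self.head(x)
        t, q = out[:, :3], out[:, 3:]
        q = q / q.norm(dim=-1, keepdim=True)  # normalize to a valid unit quaternion
        return t, q


if __name__ == "__main__":
    model = SparsePoseRegressor()
    kpts = torch.rand(4, 200, 2)     # dummy batch: 200 keypoints per image
    descs = torch.rand(4, 200, 128)
    t, q = model(kpts, descs)
    print(t.shape, q.shape)          # torch.Size([4, 3]) torch.Size([4, 4])

In this sparse-input setting, the synthetic-view augmentation mentioned in the abstract would amount to generating additional (keypoints, descriptors, pose) training tuples for viewpoints not present in the original images; the exact generation procedure used by the authors is described in the paper itself.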
Year
2017
Venue
arXiv: Computer Vision and Pattern Recognition
Field
Pattern recognition, Regression, Computer science, Image based, Pose, Augmented reality, Artificial intelligence, Autonomous system (Internet), Robotics, Machine learning, Feed forward, Scalability
DocType
Journal
Volume
abs/1712.03452
Citations
2
PageRank
0.39
References
18
Authors
3
Name               Order  Citations  PageRank
Pulak Purkait      1      113        10.71
Cheng Zhao         2      24         4.81
Christopher Zach   3      1457       84.01