Title
Learning To Match Aerial Images With Deep Attentive Architectures
Abstract
Image matching is a fundamental problem in Computer Vision. In the context of feature-based matching, SIFT and its variants have long excelled in a wide array of applications. However, for ultra-wide baselines, as in the case of aerial images captured under large camera rotations, the appearance variation goes beyond the reach of SIFT and RANSAC. In this paper we propose a data-driven, deep learning-based approach that sidesteps local correspondence by framing the problem as a classification task. Furthermore, we demonstrate that local correspondences can still be useful. To do so we incorporate an attention mechanism to produce a set of probable matches, which allows us to further increase performance. We train our models on a dataset of urban aerial imagery consisting of 'same' and 'different' pairs, collected for this purpose, and characterize the problem via a human study with annotations from Amazon Mechanical Turk. We demonstrate that our models outperform the state-of-the-art on ultra-wide baseline matching and approach human accuracy.
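The abstract frames ultra-wide-baseline matching as a binary 'same'/'different' decision over image pairs rather than as local-correspondence estimation. As a rough illustration of that framing only (this is not the authors' architecture; the layer sizes, input resolution, and two-stream layout are assumptions), a minimal Siamese-style pair classifier in PyTorch could look like this:

```python
# Hypothetical sketch of pair classification for "same"/"different" aerial image pairs.
# Not the paper's network; sizes and structure are illustrative assumptions.
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional encoder applied to both images (Siamese weights).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Classification head over the concatenated descriptors: 2 logits (same / different).
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 128, 256), nn.ReLU(),
            nn.Linear(256, 2),
        )

    def forward(self, img_a, img_b):
        feat_a = self.encoder(img_a)
        feat_b = self.encoder(img_b)
        return self.head(torch.cat([feat_a, feat_b], dim=1))

# Usage: batches of RGB crops from the two views, trained with cross-entropy
# on 'same'/'different' labels.
model = PairClassifier()
logits = model(torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```

The paper's attention mechanism, which proposes a set of probable local matches to refine this global decision, is not sketched here.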
Year: 2016
DOI: 10.1109/CVPR.2016.385
Venue: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Field: Framing (construction), Scale-invariant feature transform, Computer vision, Human study, Pattern recognition, Image matching, RANSAC, Computer science, Artificial intelligence, Deep learning, Aerial imagery
DocType: Conference
Volume: 2016
Issue: 1
ISSN: 1063-6919
Citations: 6
PageRank: 0.42
References: 20
Authors: 5
Name                Order  Citations  PageRank
Hani Altwaijry      1      9          1.14
Eduard Trulls       2      318        11.07
James Hays          3      3942       172.72
Pascal Fua          4      12768      731.45
Serge J. Belongie   5      12512      1010.13