Title
Semantically-Aware Aerial Reconstruction From Multi-Modal Data
Abstract
We consider a methodology for integrating multiple sensors along with semantic information to enhance scene representations. We propose a probabilistic generative model for inferring semantically-informed aerial reconstructions from multi-modal data within a consistent mathematical framework. The approach, called Semantically-Aware Aerial Reconstruction (SAAR), not only exploits inferred scene geometry, appearance, and semantic observations to obtain a meaningful categorization of the data, but also extends previously proposed methods by imposing structure on the prior over geometry, appearance, and semantic labels. This leads to more accurate reconstructions and the ability to fill in missing contextual labels via joint sensor and semantic information. We introduce a new multi-modal synthetic dataset in order to provide quantitative performance analysis. Additionally, we apply the model to real-world data and exploit OpenStreetMap as a source of semantic observations. We show quantitative improvements in reconstruction accuracy of large-scale urban scenes from the combination of LiDAR, aerial photography, and semantic data. Furthermore, we demonstrate the model's ability to fill in for missing sensed data, leading to more interpretable reconstructions.
Year
2015
DOI
10.1109/ICCV.2015.249
Venue
ICCV
Field
Data mining, Aerial photography, Computer science, Lidar, Artificial intelligence, Probabilistic generative model, Semantic data model, Computer vision, Categorization, Pattern recognition, Exploit, Multiple sensors, Modal
DocType
Conference
Volume
2015
Issue
1
ISSN
1550-5499
Citations
6
PageRank
0.42
References
30
Authors
3
Name                Order  Citations  PageRank
Randi Cabezas       1      8          0.77
Julian Straub       2      128        8.19
John W. Fisher III  3      8787       4.44