Title
3D Virtual Urban Scene Reconstruction From a Single Optical Remote Sensing Image
Abstract
This paper presents a low-cost and efficient method for 3D virtual urban scene reconstruction based on multi-source remote sensing big data and deep learning. By integrating maps, satellite optical images, and a digital terrain model (DTM), the proposed method produces a reasonable reconstructed 3D model of a complex urban area. The method uses two independent convolutional neural networks (CNNs): one for land cover classification and one for building height extraction. The proposed method is tested on a 100 km² scene in San Diego, USA, containing about 30,000 buildings. Land cover classification achieves an overall accuracy (OA) of 80.4% for the eight land-cover types defined in the NLCD 2011 dataset. Building height estimation achieves an average error of 1.9 m against the NYC Open Data building footprint dataset. Furthermore, the full scene reconstruction, including the estimation of both land cover and building height, can be completed in 10 min on a single NVIDIA Titan X GPU.
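The abstract describes a two-branch pipeline: one CNN predicts a per-pixel land-cover class, another predicts a per-pixel building height, and the height map is fused with the DTM to form the 3D scene. A minimal illustrative sketch of that data flow is below; the "branches" here are stand-in random projections, not the paper's actual CNNs, and all names and shapes are assumptions for illustration.

```python
import numpy as np

NUM_CLASSES = 8  # eight NLCD 2011 land-cover types, per the abstract

def land_cover_branch(image, rng):
    # Stand-in for the land-cover CNN: per-pixel class scores -> argmax.
    c = image.shape[-1]
    weights = rng.standard_normal((c, NUM_CLASSES))  # toy "model"
    scores = image @ weights                          # (h, w, NUM_CLASSES)
    return scores.argmax(axis=-1)                     # (h, w) class map

def height_branch(image, rng):
    # Stand-in for the height-regression CNN: non-negative heights (meters).
    c = image.shape[-1]
    weights = rng.standard_normal(c)                  # toy "model"
    return np.maximum(image @ weights, 0.0)           # (h, w) height map

def reconstruct(image, dtm, rng):
    # Fuse both predictions with the digital terrain model (DTM):
    # absolute surface elevation = terrain elevation + building height.
    classes = land_cover_branch(image, rng)
    heights = height_branch(image, rng)
    elevation = dtm + heights
    return classes, elevation

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))    # toy optical image (h, w, bands)
dtm = rng.random((64, 64)) * 10.0  # toy terrain model (meters)
classes, elevation = reconstruct(image, dtm, rng)
print(classes.shape, elevation.shape)
```

Because the two branches are independent, they can be trained and run separately, which matches the paper's framing of land cover and building height as two separate networks.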
Year
2019
DOI
10.1109/ACCESS.2019.2915932
Venue
IEEE ACCESS
Keywords
Optical image, urban reconstruction, convolutional neural networks
DocType
Journal
Volume
7
ISSN
2169-3536
Citations
0
PageRank
0.34
References
0
Authors
4
Name	Order	Citations	PageRank
Suo Li	1	2	0.71
Zhanyu Zhu	2	0	0.34
Haipeng Wang	3	99	9.71
Feng Xu	4	244	40.77