Title
Dense Reconstruction from Monocular SLAM with Fusion of Sparse Map-Points and CNN-Inferred Depth
Abstract
Real-time monocular visual SLAM approaches that rely on building sparse correspondences between two or more views of the scene are capable of accurately tracking camera pose and inferring the structure of the environment. However, these methods share a common problem: the reconstructed 3D map is extremely sparse. Recently, convolutional neural networks (CNNs) have been widely used to estimate scene depth from monocular color images. We observe that sparse map-points generated from epipolar geometry are locally accurate, while the CNN-inferred depth map captures high-level global context but suffers from blurry depth boundaries. Therefore, we propose a depth fusion framework that yields a dense monocular reconstruction by fully exploiting both the sparse depth samples and the CNN-inferred depth. Color key-frames are employed to guide the depth reconstruction process, avoiding smoothing over depth boundaries. Experimental results on benchmark datasets demonstrate the robustness and accuracy of our method.
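The fusion idea sketched in the abstract (locally accurate sparse map-points, a CNN depth prior carrying global context, and color key-frames that keep depth boundaries sharp) can be illustrated with a small edge-aware least-squares solver. The sketch below is only one plausible reading of that idea, not the paper's actual formulation: the function names (fuse_depth, _edge_weights), the energy weights (lam_sparse, lam_cnn, sigma_color), and the choice of a direct sparse solve are assumptions introduced purely for illustration.

```python
# Hedged sketch: color-guided fusion of sparse SLAM depths with a dense CNN depth prior.
# All names and weights here are illustrative assumptions, not the authors' method.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla


def _edge_weights(color, sigma_color):
    """Edge-aware weights between 4-connected neighbors of a color key-frame.

    color: (H, W, 3) float image in [0, 1]. Weights decay where color changes
    strongly, so smoothing is suppressed across likely depth boundaries.
    """
    diff_h = np.sum((color[:, 1:] - color[:, :-1]) ** 2, axis=-1)  # horizontal pairs
    diff_v = np.sum((color[1:, :] - color[:-1, :]) ** 2, axis=-1)  # vertical pairs
    return np.exp(-diff_h / sigma_color**2), np.exp(-diff_v / sigma_color**2)


def fuse_depth(cnn_depth, slam_depth, slam_mask, color,
               lam_sparse=10.0, lam_cnn=0.1, sigma_color=0.05):
    """Minimize a quadratic energy over per-pixel depths d:
      lam_sparse * ||d - d_slam||^2 on pixels holding a projected map-point,
      lam_cnn    * ||d - d_cnn||^2  everywhere (dense global-context prior),
      sum_ij w_ij * (d_i - d_j)^2   color-weighted smoothness between neighbors.
    """
    h, w = cnn_depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)

    # Symmetric affinity matrix W over 4-connected neighbor pairs.
    w_h, w_v = _edge_weights(color, sigma_color)
    i = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    j = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    wij = np.concatenate([w_h.ravel(), w_v.ravel()])
    W = sp.coo_matrix((np.concatenate([wij, wij]),
                       (np.concatenate([i, j]), np.concatenate([j, i]))),
                      shape=(n, n))
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W  # graph Laplacian

    # Normal equations: (L + data weights) d = data targets.
    m = slam_mask.ravel().astype(float)
    A = L + sp.diags(lam_cnn + lam_sparse * m)
    b = lam_cnn * cnn_depth.ravel() + lam_sparse * m * slam_depth.ravel()

    d = spla.spsolve(A.tocsr(), b)
    return d.reshape(h, w)


# Usage with random stand-in data (a real pipeline would instead take a key-frame,
# its CNN depth map, and the reprojected sparse map-points from the SLAM tracker):
rng = np.random.default_rng(0)
color = rng.random((60, 80, 3))
cnn_depth = rng.random((60, 80)) * 5.0
slam_mask = rng.random((60, 80)) < 0.02            # ~2% of pixels carry a map-point
slam_depth = np.where(slam_mask, cnn_depth + 0.1, 0.0)
fused = fuse_depth(cnn_depth, slam_depth, slam_mask, color)
```

In this reading, the color-dependent weights shrink near strong key-frame edges, so the smoothness term does not blur depth across object boundaries, while the sparse SLAM depths anchor locally accurate values and the CNN prior fills in texture-poor regions.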
Year
2018
DOI
10.1109/ICME.2018.8486548
Venue
2018 IEEE International Conference on Multimedia and Expo (ICME)
Keywords
Dense reconstruction, Visual SLAM, Monocular, Sparse map-point, Depth prediction
Field
Iterative reconstruction, Computer vision, Pattern recognition, Epipolar geometry, Convolutional neural network, Computer science, Robustness (computer science), Smoothing, Artificial intelligence, Depth map, Simultaneous localization and mapping, Monocular
DocType
Conference
ISSN
1945-7871
ISBN
978-1-5386-1738-0
Citations
0
PageRank
0.34
References
8
Authors
4
Name, Order, Citations, PageRank
Xiang Ji12011.57
Xinchen Ye296.90
Hongcan Xu300.34
Haojie Li4329.03