Title
Fast Semi-Dense 3D Semantic Mapping with Monocular Visual SLAM
Abstract
Fast 3D reconstruction with semantic information in road scenarios involves both geometry and appearance issues in computer vision. A key idea is that fusing geometry and appearance can boost the performance of both. Stereo cameras and RGB-D sensors are widely used for 3D reconstruction and trajectory tracking, but dense processing incurs heavy computation and storage costs. Moreover, these sensors lack the flexibility to switch seamlessly between environments of different scales, i.e., indoor and outdoor scenes. In addition, autonomous vehicles would benefit from semantic information about the road, but such information is hard to keep sufficiently accurate and consistent in the 3D map. We address this challenge by fusing direct, semi-dense Simultaneous Localisation and Mapping (SLAM) from a monocular camera with state-of-the-art deep neural network approaches. In our approach, 2D semantic information is transferred to the 3D map via correspondences between consecutive keyframes with spatial consistency. Since consecutive frames contain a great deal of semantic redundancy, it is not necessary to run semantic segmentation on every frame of the sequence; consequently, segmentation runs at a reasonable speed (about 20 Hz). We thoroughly evaluate our method on road scene datasets. The experiments show promising 3D semantic mapping in various road scenes.
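The keyframe-based label transfer described in the abstract can be sketched roughly as follows: semi-dense map points are projected into each segmented keyframe, and the 2D class probabilities are fused into a running per-point label distribution. This is a minimal illustrative sketch under assumed data shapes and a naive Bayes fusion rule; the function names, thresholds, and interfaces are hypothetical and do not reproduce the authors' implementation.

```python
import numpy as np

def project_points(points_w, T_cw, K):
    """Project world-frame 3D points into a keyframe image.

    points_w : (N, 3) world coordinates of semi-dense map points
    T_cw     : (4, 4) world-to-camera pose of the keyframe
    K        : (3, 3) camera intrinsics
    Returns pixel coordinates (N, 2) and a validity mask.
    """
    pts_h = np.hstack([points_w, np.ones((points_w.shape[0], 1))])
    pts_c = (T_cw @ pts_h.T).T[:, :3]
    valid = pts_c[:, 2] > 0.1                      # keep points in front of the camera
    uv_h = (K @ pts_c.T).T
    uv = uv_h[:, :2] / uv_h[:, 2:3]
    return uv, valid

def fuse_keyframe_semantics(points_w, label_probs, T_cw, K, seg_probs):
    """Fuse one keyframe's 2D segmentation into per-point label distributions.

    label_probs : (N, C) running class probabilities of the map points
    seg_probs   : (H, W, C) softmax output of the segmentation CNN for this keyframe
    The per-point distribution is updated multiplicatively (naive Bayes fusion,
    an assumption here) and renormalised, so labels stay consistent across keyframes.
    """
    H, W, _ = seg_probs.shape
    uv, valid = project_points(points_w, T_cw, K)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = valid & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    label_probs[inside] *= seg_probs[v[inside], u[inside]]
    label_probs[inside] /= label_probs[inside].sum(axis=1, keepdims=True) + 1e-12
    return label_probs
```

Because only keyframes are segmented rather than every frame, the segmentation cost stays bounded by the keyframe rate, which is consistent with the reported speed of about 20 Hz.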
Year
2017
Venue
2017 IEEE 20TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC)
Field
Computer vision, Stereo cameras, Semantic mapping, Segmentation, Artificial intelligence, Engineering, Monocular, Artificial neural network, Trajectory, 3D reconstruction, Computation
DocType
Conference
ISSN
2153-0009
Citations
1
PageRank
0.35
References
0
Authors
4
Name              | Order | Citations | PageRank
Xuanpeng Li       | 1     | 15        | 4.48
Huanxuan Ao       | 2     | 1         | 0.35
Rachid Belaroussi | 3     | 92        | 13.17
Dominique Gruyer  | 4     | 485       | 52.30