Title
Towards a Meaningful 3D Map Using a 3D Lidar and a Camera.
Abstract
Semantic 3D maps are required for various applications, including robot navigation and surveying, and their importance has increased significantly. Most existing studies on semantic mapping are camera-based approaches that cannot operate in large-scale environments owing to their computational burden. Recently, combining a 3D Lidar with a camera was introduced to address this problem, and this sensor combination has also been utilized for semantic 3D mapping. In this study, we propose an algorithm that consists of semantic mapping and map refinement. In the semantic mapping stage, a GPS and an IMU are integrated to estimate the odometry of the system, and the point clouds measured by a 3D Lidar are registered using this information. Furthermore, we use the latest CNN-based semantic segmentation to obtain semantic information about the surrounding environment. To integrate the point cloud with the semantic information, we developed incremental semantic labeling, which includes coordinate alignment, error minimization, and semantic information fusion. Additionally, to improve the quality of the generated semantic map, map refinement is processed in a batch; it enhances the spatial distribution of labels and effectively removes traces produced by moving vehicles. We conduct experiments on challenging sequences and demonstrate that our algorithm outperforms state-of-the-art methods in terms of accuracy and intersection over union.
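The incremental labeling step described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): points from each Lidar scan are transformed into the map frame using the GPS/IMU odometry pose, and the per-point class labels predicted by the segmentation CNN are fused into a voxel grid by accumulating label votes. The class count, voxel size, and function names (fuse_scan, current_semantic_map) are illustrative assumptions, not taken from the paper.

# Minimal sketch of incremental semantic label fusion, assuming a voxel-grid
# map and majority voting over CNN labels; parameters are illustrative only.
import numpy as np
from collections import defaultdict

NUM_CLASSES = 12   # assumed number of semantic classes
VOXEL_SIZE = 0.2   # assumed voxel resolution in meters

# semantic map: voxel index -> accumulated label votes
label_votes = defaultdict(lambda: np.zeros(NUM_CLASSES, dtype=np.int32))

def fuse_scan(points_lidar, labels, T_map_lidar):
    """Fuse one labeled Lidar scan into the semantic voxel map.

    points_lidar : (N, 3) points in the Lidar frame
    labels       : (N,) semantic class index per point (from the CNN)
    T_map_lidar  : (4, 4) pose of the Lidar in the map frame (from GPS/IMU odometry)
    """
    # Coordinate alignment: transform points into the common map frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_map = (T_map_lidar @ pts_h.T).T[:, :3]

    # Semantic information fusion: accumulate one vote per point in its voxel.
    voxels = np.floor(pts_map / VOXEL_SIZE).astype(np.int64)
    for v, c in zip(map(tuple, voxels), labels):
        label_votes[v][c] += 1

def current_semantic_map():
    """Return each voxel's most-voted semantic label (majority fusion)."""
    return {v: int(np.argmax(votes)) for v, votes in label_votes.items()}

In this sketch, fuse_scan would be called once per incoming scan, and current_semantic_map would yield a coarse voxel-level semantic map; the batch map-refinement step described in the abstract would then operate on such a map.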
Year
2018
DOI
10.3390/s18082571
Venue
SENSORS
Keywords
3D Lidar, large-scale mapping, map refinement, moving vehicle removal, semantic mapping, semantic reconstruction
Field
Remote sensing, Electronic engineering, Lidar, Engineering
DocType
Journal
Volume
18
Issue
8
Citations
1
PageRank
0.40
References
7
Authors
3
Name            Order  Citations  PageRank
Jongmin Jeong   1      6          1.21
Tae Sung Yoon   2      20         7.54
Jin Bae Park    3      1351       102.77