Abstract |
---|
Drivable space detection, or road perception, is one of the most important tasks for autonomous driving. Sensor-based vision/laser systems may have limited performance in bad illumination or weather conditions, so prior knowledge of the road from map data is expected to improve effectiveness. This paper employs map information extracted from OpenStreetMap (OSM) data and explores its capability for road perception. The OSM data can be used to render virtual street views and further refined to provide a prior road mask. The OSM masks can also be combined with image processing and Lidar point cloud approaches to characterize the drivable space. Using a Fully Convolutional Neural Network (FCNN), the suitability of OSM data for deep learning methods is also discussed. |
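The abstract describes combining an OSM-derived prior road mask with sensor-based drivable-space estimates. A minimal sketch of one such combination is below; the function name, the thresholded-AND fusion rule, and the toy inputs are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def fuse_masks(osm_mask: np.ndarray, sensor_score: np.ndarray,
               threshold: float = 0.5) -> np.ndarray:
    """Return a boolean drivable-space mask (illustrative only).

    osm_mask     -- boolean prior rendered from OSM road geometry
    sensor_score -- per-pixel confidence in [0, 1] from a vision/Lidar pipeline
    """
    # Keep a pixel drivable only where the OSM prior marks road AND the
    # sensor pipeline is sufficiently confident (a simple AND fusion).
    return osm_mask & (sensor_score >= threshold)

# Toy 2x3 example
osm = np.array([[True, True, False],
                [True, False, False]])
score = np.array([[0.9, 0.2, 0.8],
                  [0.7, 0.9, 0.1]])
print(fuse_masks(osm, score))
# → [[ True False False]
#    [ True False False]]
```

A real system would register the rendered OSM mask to the camera or Lidar frame via GPS pose before any per-pixel fusion.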
Year | Venue | Field |
---|---|---|
2018 | Intelligent Vehicles Symposium | Computer vision, Convolutional neural network, Computer science, Image processing, Image segmentation, Lidar, Global Positioning System, Artificial intelligence, Deep learning, Perception |
DocType | Citations | PageRank |
---|---|---|
Conference | 2 | 0.38 |

References | Authors |
---|---|
0 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yang Zheng | 1 | 35 | 8.53 |
Izzat Izzat | 2 | 7 | 1.52 |