Title
Early Fusion of Camera and Lidar for robust road detection based on U-Net FCN.
Abstract
Automated vehicles rely on the accurate and robust detection of the drivable area, often classified into free space, road area, and lane information. Most current approaches use monocular or stereo cameras for this task. However, LiDAR sensors are becoming more common and offer unique properties for road area detection, such as precision and robustness to weather conditions. We therefore propose two approaches for pixel-wise semantic binary segmentation of the road area based on a modified U-Net Fully Convolutional Network (FCN) architecture. The first approach, UView-Cam, uses a single camera image, whereas the second approach, UGrid-Fused, incorporates an early fusion of LiDAR and camera data into a multi-dimensional occupancy grid representation as FCN input. The fusion of camera and LiDAR allows the individual sensor properties to be leveraged efficiently and robustly in a single FCN. UView-Cam is trained on multiple publicly available datasets of street environments, while UGrid-Fused is trained on the KITTI dataset. In the KITTI Road/Lane Detection benchmark, the proposed networks reach MaxF scores of 94.23% and 93.81%, respectively. Both approaches achieve real-time performance with a detection rate of about 10 Hz.
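The abstract describes an early-fusion pattern: camera channels and LiDAR-derived occupancy grid channels are stacked into one multi-channel input tensor before being fed to a U-Net-style FCN that outputs a pixel-wise binary road mask. The PyTorch sketch below is only an illustration of that pattern, not the authors' implementation; the network size, channel counts, and input resolution are assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's exact network):
# early fusion of camera and LiDAR grid channels for binary road segmentation.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Small U-Net-style encoder-decoder with skip connections."""
    def __init__(self, in_channels=5, base=16):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1 = block(in_channels, base)
        self.enc2 = block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # one logit per pixel: road / not road

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Early fusion: concatenate camera and LiDAR grid channels before the FCN.
rgb = torch.rand(1, 3, 192, 320)         # camera image (assumed resolution)
lidar_grid = torch.rand(1, 2, 192, 320)  # e.g. occupancy + height channels (assumed)
fused = torch.cat([rgb, lidar_grid], dim=1)          # 5-channel early-fused input
logits = TinyUNet(in_channels=fused.shape[1])(fused)
road_mask = torch.sigmoid(logits) > 0.5              # pixel-wise binary road mask
```

The design choice illustrated here is that the fusion happens at the input, so a single FCN processes both modalities jointly rather than merging separate per-sensor networks later.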
Year
2018
Venue
Intelligent Vehicles Symposium
Field
Computer vision, Stereo cameras, Computer science, Fusion, Robustness (computer science), Free space, Lidar, Artificial intelligence, Camera image, Monocular, Grid
DocType
Conference
Citations
1
PageRank
0.34
References
0
Authors
6
Name              Order  Citations  PageRank
Florian Wulff     1      1          0.34
Bernd Schäufele   2      11         3.02
Oliver Sawade     3      28         5.84
Daniel Becker     4      16         2.73
Birgit Henke      5      1          0.34
Ilja Radusch      6      244        37.15