Abstract
---
This paper investigates neural network architectures that fuse feature-level data from radar and vision sensors in order to improve automotive environment perception for advanced driver assistance systems. Fusion is performed with occupancy grids, which incorporate sensor-specific information mapped from the sensors' individual detection lists. The fusion step is evaluated on three types of neural networks: (1) fully convolutional, (2) auto-encoder and (3) auto-encoder with skip connections. These networks are trained to fuse radar and camera occupancy grids against ground truth obtained from lidar scans. A detailed analysis of network architectures and parameters is performed. Results are compared to classical Bayesian occupancy fusion using typical evaluation metrics for pixel-wise classification tasks, such as intersection over union and pixel accuracy. This paper shows that it is possible to perform grid fusion of feature-level sensor data with the proposed system architecture. The auto-encoder architectures in particular show significant improvements in the evaluation metrics over the classical Bayesian fusion method.
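The paper itself provides no code; as a rough illustration of the third architecture named in the abstract, the sketch below (PyTorch, with grid size, channel counts, and layer widths chosen arbitrarily rather than taken from the paper) stacks a radar occupancy grid and a camera occupancy grid as two input channels and passes them through an encoder-decoder with skip connections to produce a single fused occupancy grid.

```python
# Minimal sketch only: layer widths, grid size, and depth are assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic unit of encoder and decoder.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class SkipAutoEncoderFusion(nn.Module):
    """Fuses a 2-channel input (radar grid, camera grid) into a 1-channel
    occupancy map; skip connections pass encoder features to the decoder."""

    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(2, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 upsampled + 64 skipped
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 upsampled + 32 skipped
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))  # per-cell occupancy probability


# Usage: radar and camera occupancy grids stacked as channels of one tensor.
grids = torch.rand(1, 2, 128, 128)       # batch, [radar, camera], H, W
fused = SkipAutoEncoderFusion()(grids)   # (1, 1, 128, 128) fused grid
```

Evaluation against lidar-derived ground truth would then threshold the fused per-cell probabilities and compute intersection over union and pixel accuracy, the metrics listed in the abstract.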
Year | DOI | Venue
---|---|---
2019 | 10.1109/ICCVE45908.2019.8965213 | 2019 IEEE International Conference on Connected Vehicles and Expo (ICCVE)

Keywords | Field | DocType
---|---|---
sensor fusion, environmental modeling, camera-radar fusion, convolutional neural network | Radar, Pattern recognition, Convolutional neural network, Computer science, Advanced driver assistance systems, Network architecture, Sensor fusion, Ground truth, Artificial intelligence, Artificial neural network, Grid | Conference

ISSN | ISBN | Citations
---|---|---
2378-1289 | 978-1-7281-0143-9 | 0

PageRank | References | Authors
---|---|---
0.34 | 5 | 2

Name | Order | Citations | PageRank
---|---|---|---
Gábor Balázs | 1 | 0 | 0.34
Walter Stechele | 2 | 365 | 52.77