Title |
---|
An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks |
Abstract |
---|
Aimed at improving vision-aided state estimation for size, weight, and power (SWaP)-constrained robots, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy, heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise, computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses generated. MHDE fuses noisy, heterogeneous sensory inputs through two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis-generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset, benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms, and demonstrated a significant runtime decrease and a performance increase over the next-best-performing method. |
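The robustness benefit of the multi-hypothesis formulation can be illustrated with a toy sketch (NumPy only). This is not the paper's implementation: the function name, the integer-offset "flow" representation, and the winner-take-all photometric selection are all illustrative assumptions. The idea shown is simply that, given n candidate dense pixel mappings between a source and a target image, warping the source with each candidate and keeping the one with the lowest reconstruction error lets a single bad hypothesis be discarded rather than corrupt the estimate.

```python
import numpy as np

def best_hypothesis(source, target, flows):
    """Toy winner-take-all selection over n dense-correspondence hypotheses.

    source, target: (H, W) grayscale images.
    flows: (n, H, W, 2) integer pixel offsets (x offset in channel 0,
    y offset in channel 1) -- a stand-in for predicted correspondences.
    Returns (index of best hypothesis, its mean squared photometric error).
    """
    h, w = source.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best_err, best_idx = np.inf, -1
    for i, flow in enumerate(flows):
        # Apply the candidate mapping, clamping to stay inside the image.
        ty = np.clip(ys + flow[..., 1], 0, h - 1)
        tx = np.clip(xs + flow[..., 0], 0, w - 1)
        recon = source[ty, tx]
        # Photometric (reconstruction) error of this hypothesis.
        err = np.mean((recon - target) ** 2)
        if err < best_err:
            best_err, best_idx = err, i
    return best_idx, best_err
```

For example, if one hypothesis encodes the true one-pixel horizontal shift between the images and another encodes zero motion, the selection returns the correct hypothesis with zero error.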
Year | DOI | Venue |
---|---|---|
2018 | 10.3390/s18051427 | SENSORS |
Keywords | Field | DocType |
deep learning,sensor fusion,optical flow | Heteroscedasticity,Pattern recognition,Odometry,Embodied cognition,Robustness (computer science),Sensor fusion,Electronic engineering,Artificial intelligence,Engineering,Deep learning,Fuse (electrical),Optical flow | Journal |
Volume | Issue | Citations |
18 | 5 | 0
PageRank | References | Authors |
0.34 | 7 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
E. Jared Shamwell | 1 | 2 | 0.71 |
William D. Nothwang | 2 | 3 | 3.76 |
Donald Perlis | 3 | 306 | 54.22 |