Abstract |
---|
We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs. |
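The core idea the abstract describes, producing a dense depth map with per-pixel confidence from sparse range samples and a guide image, can be illustrated with a simple cross-bilateral interpolation: each output pixel averages nearby sparse depth samples, weighted by spatial proximity and guide-image similarity, and the accumulated weight serves as a confidence value. This is a minimal illustrative sketch, not the authors' accelerated high-dimensional filter, and it omits the paper's handling of motion and asynchronous data rates; the function name and parameters are hypothetical.

```python
import numpy as np

def densify_depth(image, sparse_depth, sigma_s=3.0, sigma_r=0.1):
    """Interpolate sparse depth samples over a grayscale guide image.

    Cross-bilateral weighting: each pixel's depth is a weighted average
    of the sparse samples, with weights combining spatial distance and
    guide-intensity similarity. The accumulated weight doubles as an
    (unnormalized) confidence map, echoing the paper's per-pixel
    confidence output. Illustrative sketch only.
    """
    h, w = image.shape
    ys, xs = np.nonzero(sparse_depth > 0)      # sparse sample locations
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    gy, gx = np.mgrid[0:h, 0:w]                # pixel coordinate grids
    for y, x in zip(ys, xs):
        spatial = ((gy - y) ** 2 + (gx - x) ** 2) / (2 * sigma_s ** 2)
        range_ = (image - image[y, x]) ** 2 / (2 * sigma_r ** 2)
        wgt = np.exp(-(spatial + range_))      # joint-space Gaussian weight
        num += wgt * sparse_depth[y, x]
        den += wgt
    dense = num / np.maximum(den, 1e-12)       # avoid division by zero
    return dense, den                          # den = confidence map
```

The brute-force loop over samples is O(pixels × samples); the paper's contribution is precisely an accelerated, GPU-friendly evaluation of this kind of high-dimensional filter.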
Year | DOI | Venue |
---|---|---|
2010 | 10.1109/CVPR.2010.5540086 | 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |
Keywords | Field | DocType |
---|---|---|
data structures,sensors,acceleration,layout,parallel processing,optical sensor,pixel,optical filters,dynamic range,lasers,image sensors,depth map,high resolution,image resolution | Object detection,Computer vision,Data stream mining,Pattern recognition,Image sensor,Computer science,Filter (signal processing),Pixel,Artificial intelligence,Depth map,Upsampling,Image resolution | Conference |
Volume | Issue | ISSN |
---|---|---|
2010 | 1 | 1063-6919 |
Citations | PageRank | References |
---|---|---|
77 | 3.46 | 15 |
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Jennifer Dolson | 1 | 271 | 14.03 |
Jongmin Baek | 2 | 283 | 14.08 |
Christian Plagemann | 3 | 644 | 39.41 |
Sebastian Thrun | 4 | 20347 | 2302.56 |