Abstract |
---|
We propose a method for computing a depth map at interactive rates from a set of closely spaced, calibrated video cameras and a Time-of-Flight (ToF) camera, with the objective of synthesizing free-viewpoint video in real time. All computations are performed on the graphics processing unit (GPU), leaving the CPU available for other tasks. Depth is estimated from the color camera data in textured regions and from the ToF data in textureless ones, with the trade-off between the two sources determined locally from the reliability of the depth estimates obtained from the color images. For this purpose, we use a confidence measure that accounts for the shape of the photo-consistency score as a function of depth. The final depth map is computed by minimizing a cost function. This approach offers significant time savings over methods that denoise the photo-consistency score maps obtained at every depth, while still achieving acceptable quality in the rendered image. |
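The abstract describes a per-pixel trade-off between stereo (photo-consistency) depth and ToF depth, weighted by a confidence derived from the shape of the photo-consistency curve over depth. A minimal sketch of such a confidence-weighted fusion; the peak-ratio confidence heuristic and function names here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def photo_consistency_confidence(scores):
    """Confidence from the shape of the photo-consistency curve over depth.

    scores: array of shape (num_depths, H, W), lower = better match.
    A sharp, unambiguous minimum yields confidence near 1; a flat or
    ambiguous curve (textureless region) yields confidence near 0.
    (Peak-ratio style heuristic, an illustrative stand-in for the
    paper's measure.)
    """
    s = np.sort(scores, axis=0)           # sort along the depth axis
    best, second = s[0], s[1]
    return 1.0 - best / np.maximum(second, 1e-9)

def fuse_depth(stereo_depth, tof_depth, confidence):
    """Blend per pixel: textured regions (high confidence) trust the
    stereo estimate; textureless ones fall back to the ToF measurement."""
    return confidence * stereo_depth + (1.0 - confidence) * tof_depth
```

In textured regions the photo-consistency minimum is sharp, so the stereo estimate dominates; in textureless regions the curve is flat, confidence drops, and the ToF depth takes over, mirroring the locally determined trade-off described above.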
Year | DOI | Venue |
---|---|---|
2013 | 10.1007/s00138-012-0428-2 | Mach. Vis. Appl. |

Keywords | DocType | Volume |
---|---|---|
Image-based rendering, Stereo, Range data, Sensor fusion, Graphics processors | Journal | 24 |

Issue | ISSN | Citations |
---|---|---|
4 | 0932-8092 | 1 |

PageRank | References | Authors |
---|---|---|
0.35 | 11 | 2 |

Name | Order | Citations | PageRank |
---|---|---|---|
Stéphane Pelletier | 1 | 6 | 1.10 |
Jeremy R. Cooperstock | 2 | 449 | 102.09 |