Title
Real-time 3D rendering using depth-based geometry reconstruction and view-dependent texture mapping.
Abstract
With the recent proliferation of high-fidelity head-mounted displays (HMDs), there is increasing demand for realistic 3D content that can be integrated into virtual reality environments. However, creating photorealistic models is not only difficult but also time-consuming. A simpler alternative involves scanning objects in the real world and rendering their digitized counterparts in the virtual world. Capturing objects can be achieved by performing a 3D scan using widely available consumer-grade RGB-D cameras. This process involves reconstructing the geometric model from depth images generated by a structured light or time-of-flight sensor. The color map is determined by fusing data from multiple color images captured during the scan. Existing methods compute the color of each vertex by averaging the colors from all of these images. Blending colors in this manner creates low-fidelity models that appear blurry (Figure 1, right). Furthermore, this approach yields textures with fixed lighting that is baked onto the model. This limitation becomes more apparent when viewed in head-tracked virtual reality, as the illumination (e.g. specular reflections) does not change appropriately based on the user's viewpoint.
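To illustrate the distinction the abstract draws, the following minimal sketch (Python with NumPy, not taken from the poster) contrasts the naive per-vertex color averaging used by existing methods with a simple view-dependent weighting, in which captures whose viewing direction is closer to the current eye direction contribute more. The function names, the cosine-power weighting, and the random test data are illustrative assumptions, not the authors' algorithm.

# Minimal sketch (not the authors' implementation): naive color averaging
# versus a simple view-dependent blend. All inputs are hypothetical
# stand-ins for data produced by a real RGB-D capture pipeline.
import numpy as np

def average_vertex_colors(samples):
    """Naive fusion: mean of all color samples observed for a vertex.

    samples: (N, 3) array of RGB colors seen from N capture viewpoints.
    Averaging bakes the capture lighting into the model and blurs detail.
    """
    return samples.mean(axis=0)

def view_dependent_color(samples, capture_dirs, view_dir, sharpness=8.0):
    """Weight each capture by how well its viewing direction matches the
    current head-tracked view direction, so view-dependent effects follow
    the user's viewpoint instead of being fixed.

    capture_dirs: (N, 3) unit vectors from the vertex toward each capture camera.
    view_dir:     (3,) unit vector from the vertex toward the current eye.
    """
    weights = np.clip(capture_dirs @ view_dir, 0.0, None) ** sharpness
    if weights.sum() == 0.0:
        return samples.mean(axis=0)  # no camera faces the viewer: fall back to the average
    weights /= weights.sum()
    return weights @ samples

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = rng.uniform(0.0, 1.0, size=(5, 3))   # colors from 5 captures
    capture_dirs = rng.normal(size=(5, 3))
    capture_dirs /= np.linalg.norm(capture_dirs, axis=1, keepdims=True)
    view_dir = capture_dirs[2]                     # eye close to capture #2
    print("averaged:      ", average_vertex_colors(samples))
    print("view-dependent:", view_dependent_color(samples, capture_dirs, view_dir))

Under such a weighting, view-dependent effects like specular reflections shift with the head-tracked viewpoint rather than being averaged away, which is the behavior the abstract identifies as missing in blended, baked-in textures.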
Year
2016
DOI
10.1145/2945078.2945162
Venue
SIGGRAPH Posters
Field
Computer vision, Texture mapping, Structured light, Computer graphics (images), 3D rendering, Computer science, Real-time rendering, Geometric modeling, RGB color model, Artificial intelligence, Rendering (computer graphics), 3D reconstruction
DocType
Conference
Citations
0
PageRank
0.34
References
2
Authors
3
Name            Order  Citations  PageRank
Chih-Fan Chen   1      1          1.37
Mark Bolas      2      880        89.87
Evan A. Suma    3      780        67.37