Abstract
Summary form only given. The ICT Mixed Reality Lab will demonstrate a pipeline for rapidly generating personalized avatars from multiple depth and RGB scans of a user captured with a consumer-level sensor such as a Microsoft Kinect. Built on a fusion of state-of-the-art techniques in graphics, surface reconstruction, and animation, our semi-automatic method can produce a fully rigged, skinned, and textured character model suitable for real-time virtual environments in less than 15 minutes. First, a 3D point cloud is collected from the sensor using a simultaneous localization and mapping (SLAM) approach to track the device's movements over time (see Figure 1.a). Next, surface reconstruction techniques generate a watertight 3D mesh from the raw 3D points (see Figure 1.b). The resulting model is then analyzed to determine the human joint locations; if a skeleton can be successfully generated for the model, the mesh is rigged and skinned using automatically calculated weights [2]. Finally, photos captured periodically during the scanning process with the sensor's RGB camera are used to texture the final model. The resulting avatar is suitable for real-time animation in a virtual environment or video game engine (see Figure 1.c). We will demonstrate our avatar generation pipeline at IEEE Virtual Reality 2013. Conference attendees may opt to be scanned, and their generated avatar will be provided to them either on a USB stick or by email. A video of this demo can be found on the MxR Lab website [1].
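The four stages described in the abstract can be sketched as a minimal, self-contained pipeline. Every function, type, and data layout below is hypothetical, standing in for the lab's actual SLAM, surface-reconstruction, and auto-rigging components:

```python
# Illustrative sketch of the four-stage avatar pipeline; all names here
# are invented for illustration, not taken from the actual implementation.

def capture_point_cloud(sensor_frames):
    # Stage 1: SLAM-style fusion -- each frame's depth samples are brought
    # into a common coordinate frame using the tracked sensor pose
    # (modeled here as a simple translation for brevity).
    cloud = []
    for (tx, ty, tz), depth_points in sensor_frames:
        cloud.extend((x + tx, y + ty, z + tz) for x, y, z in depth_points)
    return cloud

def reconstruct_mesh(cloud):
    # Stage 2: surface reconstruction placeholder -- a real system would run
    # a method such as Poisson reconstruction to obtain a watertight mesh.
    return {"vertices": cloud, "faces": []}

def rig_and_skin(mesh):
    # Stage 3: estimate joint locations and compute skinning weights
    # automatically; returns None when no skeleton fits the scan.
    if not mesh["vertices"]:
        return None
    skeleton = ["pelvis", "spine", "head"]  # illustrative joint list
    weights = {i: {"spine": 1.0} for i in range(len(mesh["vertices"]))}
    return {"skeleton": skeleton, "weights": weights}

def texture_from_photos(mesh, photos):
    # Stage 4: project the periodically captured RGB photos onto the mesh.
    return {"mesh": mesh, "texture_photos": len(photos)}

def build_avatar(sensor_frames, photos):
    # Runs the stages in order; fails when rigging is not possible.
    cloud = capture_point_cloud(sensor_frames)
    mesh = reconstruct_mesh(cloud)
    rig = rig_and_skin(mesh)
    if rig is None:
        raise ValueError("could not fit a skeleton to the scan")
    return {**texture_from_photos(mesh, photos), **rig}
```

The semi-automatic nature of the real pipeline (e.g. operator checks between stages) is omitted here; the sketch only conveys the data flow from raw scans to a rigged, textured avatar.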
Year | DOI | Venue
---|---|---
2013 | 10.1109/VR.2013.6549424 | Virtual Reality
Keywords | Field | DocType
---|---|---
avatars, cameras, computer animation, computer games, image reconstruction, image sensors, image texture, mesh generation, 3D point cloud, ICT Mixed Reality Lab, IEEE Virtual Reality 2013, simultaneous localization and mapping (SLAM), animation, consumer-level sensor, device movement tracking, fully rigged character model, graphics, human joint locations, personalized avatar rapid generation, real-time virtual environments, semi-automatic method, sensor RGB camera, skinned character model, surface reconstruction, textured character model, multiple depth and RGB scans, video game engine, watertight 3D mesh, virtual environments, depth sensors | Virtual reality, Computer graphics (images), Computer science, Artificial intelligence, RGB color model, Simultaneous localization and mapping, Graphics, Computer vision, Simulation, Animation, Mixed reality, Computer animation, Point cloud | Conference
ISSN | ISBN | Citations
---|---|---
1087-8270 | 978-1-4673-4795-2 | 0
PageRank | References | Authors
---|---|---
0.34 | 1 | 4
Name | Order | Citations | PageRank
---|---|---|---
Evan A. Suma | 1 | 780 | 67.37 |
David M. Krum | 2 | 428 | 37.57 |
Thai Phan | 3 | 22 | 5.05 |
Mark Bolas | 4 | 880 | 89.87 |