Title
Generating unified model for dressed virtual humans
Abstract
Scenes with crowds of dressed virtual humans are becoming increasingly important in 3D games and virtual reality applications. Crowd scenes, which include large numbers of virtual humans, require complex computation for animation and rendering. In this research, new methods are proposed to generate efficient virtual human models by unifying a body and a garment into a single animatable model that carries skinning parameters for common skeleton-driven animation. The generated model has controlled geometric complexity and semantic information. The unified model is constructed using the correspondence between the body and garment meshes. To establish this correspondence, two opposite optimization methods are proposed and compared: the first fits the body onto the garment, and the second fits the garment onto the body. The innovative aspect of our method lies in supporting multiple correspondences between body and cloth parts. This makes it possible to handle skirt models, which are difficult to process with previous approaches because of their topological differences from the body model.
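The core idea of the abstract, inheriting skinning parameters for garment vertices from corresponding body vertices so that both animate with the common skeleton, can be illustrated with a minimal sketch. This is not the paper's optimization (which fits one mesh onto the other and supports multiple correspondences); it uses a simple nearest-vertex correspondence, and all names and data are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): transfer skinning weights
# from a body mesh to a garment mesh via nearest-vertex correspondence,
# so the unified model animates with one skeleton.

def nearest_body_vertex(g, body_vertices):
    """Index of the body vertex closest to garment vertex g."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(body_vertices)), key=lambda i: dist2(g, body_vertices[i]))

def transfer_skinning_weights(garment_vertices, body_vertices, body_weights):
    """Each garment vertex inherits the bone weights of its nearest
    body vertex (a one-to-one mapping; the paper additionally handles
    multiple correspondences, e.g. for skirts)."""
    return [body_weights[nearest_body_vertex(g, body_vertices)]
            for g in garment_vertices]

# Toy example: two body vertices bound to different bones.
body = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
weights = [{"hip": 1.0}, {"spine": 1.0}]
garment = [(0.1, 0.05, 0.0), (0.0, 0.9, 0.1)]
garment_weights = transfer_skinning_weights(garment, body, weights)
print(garment_weights)
```

A real implementation would accelerate the nearest-neighbor search with a spatial data structure (e.g. a k-d tree) and blend weights from several nearby body vertices rather than copying from one.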
Year
DOI
Venue
2005
10.1007/s00371-005-0339-6
The Visual Computer
Keywords
Field
DocType
dressed virtual human modeling, shape fitting, multiple correspondence mapping, virtual reality, virtual human, unified model
Crowds,Computer vision,Skinning,Virtual reality,Polygon mesh,Computer graphics (images),Computer science,Animation,Artificial intelligence,Virtual actor,Rendering (computer graphics),Computation
Journal
Volume
Issue
ISSN
21
8-10
1432-2315
Citations 
PageRank 
References 
6
0.50
16
Authors
4
Name	Order	Citations	PageRank
Seungwoo Oh	1	37	2.79
HyungSeok Kim	2	116	22.09
Nadia Magnenat-Thalmann	3	5119	659.15
KwangYun Wohn	4	309	42.24