Abstract |
---|
In this paper, we present a feature-based approach to cloning facial expressions from an input face model onto an output model, using predefined source key-models and the corresponding target key-models. Adopting a scattered data interpolation technique, our approach consists of two parts: analysis of face key-models and synthesis of facial expressions. In the analysis part, carried out once at the beginning, key-models are segmented automatically into five regions, each containing one of five facial features (the two eyes, the two cheeks, and the mouth), which give rise to five sets of source key-shapes and the corresponding sets of target key-shapes. Using the key-shapes of each source feature, those of the corresponding target feature are parameterized. In the synthesis part, given a sequence of face models comprising an input animation, five output features are obtained separately by blending their own target key-shapes. These separately produced features are then combined to synthesize the output face model at each frame. Our feature-based approach convincingly clones diverse expressions, including asymmetric ones, with a small number of face key-models while exhibiting on-line, real-time performance. Copyright (c) 2005 John Wiley & Sons, Ltd. |
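The per-frame synthesis step described in the abstract — blending each feature's target key-shapes and then combining the five blended regions into one face model — could be sketched as below. This is a minimal illustration, not the paper's implementation: all names (`blend_feature`, `synthesize_frame`, `regions`) are assumptions, and the blending weights are taken as given, whereas the paper derives them via scattered data interpolation over the source key-shapes.

```python
import numpy as np

def blend_feature(key_shapes, weights):
    """Blend one feature's target key-shapes with the given weights.

    key_shapes: (K, V, 3) array of K key-shapes with V vertices each.
    weights:    (K,) blending weights (assumed precomputed; the paper
                obtains them by scattered data interpolation).
    Returns the blended (V, 3) vertex positions for this feature region.
    """
    return np.tensordot(weights, key_shapes, axes=1)

def synthesize_frame(feature_key_shapes, feature_weights, regions, n_vertices):
    """Combine five separately blended features into one output face model.

    feature_key_shapes: dict feature name -> (K, V_region, 3) key-shapes.
    feature_weights:    dict feature name -> (K,) weights for this frame.
    regions:            dict feature name -> vertex indices of that region.
    """
    face = np.zeros((n_vertices, 3))
    for name, key_shapes in feature_key_shapes.items():
        # Each region is blended independently, then written into place.
        face[regions[name]] = blend_feature(key_shapes, feature_weights[name])
    return face
```

Blending each of the five regions independently is what lets asymmetric expressions (e.g., a one-sided smile) be reproduced from a small set of key-models, since the mouth, eyes, and cheeks need not share one global set of weights.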
Year | DOI | Venue |
---|---|---|
2005 | 10.1002/cav.81 | COMPUTER ANIMATION AND VIRTUAL WORLDS |
Keywords | Field | DocType |
facial animation, virtual humans and avatars, motion retargeting, emotions and personality | Computer vision, Face hallucination, Parameterized complexity, Expression (mathematics), Computer science, Interpolation, Facial expression, Artificial intelligence, Animation, Computer facial animation, Feature based | Journal |
Volume | Issue | ISSN |
16 | 3-4 | 1546-4261 |
Citations | PageRank | References |
6 | 0.55 | 14 |
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Bongcheol Park | 1 | 6 | 0.55 |
Heejin Chung | 2 | 7 | 0.91 |
Tomoyuki Nishita | 3 | 2062 | 306.43 |
Sung Yong Shin | 4 | 1904 | 168.33 |