Title: Real-Time Speech-Driven Animation of Expressive Talking Faces
Abstract
In this paper, we present a real-time facial animation system in which speech synchronously drives mouth movements and facial expressions. Considering five basic emotions, a hierarchical structure is established with an upper layer for emotion classification. Based on the recognized emotion label, the lower-layer classifier operates at the sub-phonemic level, modelling the relationship between the acoustic features of frames and the audio labels within phonemes. Using certain constraints, the predicted emotion labels of the speech are adjusted to obtain facial expression labels, which are then combined with the sub-phonemic labels. These combinations are mapped to facial action units (FAUs), and audio-visually synchronized animation of mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and that the synthesized facial sequences achieve convincing quality.
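The two-layer structure described above can be sketched in a minimal, hypothetical form: an upper-layer classifier recognizes the utterance-level emotion, and the recognized emotion then selects an emotion-specific lower-layer model that labels each acoustic frame at the sub-phonemic level. The class names, feature vectors, and nearest-centroid decision rule below are illustrative assumptions for exposition, not the authors' actual models.

```python
# Hypothetical sketch of the paper's hierarchical two-layer classifier.
# Upper layer: utterance-level emotion recognition.
# Lower layer: per-frame sub-phonemic labelling, conditioned on the emotion.
# Classifier choice (nearest centroid) and all labels are assumptions.

def nearest(label_centroids, feature):
    """Return the label whose centroid is closest (squared Euclidean) to `feature`."""
    return min(
        label_centroids,
        key=lambda lbl: sum((f - c) ** 2 for f, c in zip(feature, label_centroids[lbl])),
    )

def classify_utterance(frames, emotion_centroids, subphoneme_models):
    """Two-layer classification: emotion first, then per-frame sub-phonemic labels."""
    # Upper layer: classify the emotion from the mean acoustic frame.
    dim = len(frames[0])
    mean_frame = [sum(f[i] for f in frames) / len(frames) for i in range(dim)]
    emotion = nearest(emotion_centroids, mean_frame)
    # Lower layer: the recognized emotion selects its own sub-phonemic model.
    sub_labels = [nearest(subphoneme_models[emotion], f) for f in frames]
    return emotion, sub_labels

# Toy usage with 2-D "acoustic features" (illustrative values only).
emotion_centroids = {"happy": [1.0, 1.0], "sad": [-1.0, -1.0]}
subphoneme_models = {
    "happy": {"/a/": [1.0, 0.0], "/m/": [0.0, 1.0]},
    "sad": {"/a/": [-1.0, 0.0], "/m/": [0.0, -1.0]},
}
frames = [[0.9, 0.1], [0.1, 1.1], [1.2, 0.8]]
emotion, labels = classify_utterance(frames, emotion_centroids, subphoneme_models)
print(emotion, labels)  # → happy ['/a/', '/m/', '/a/']
```

In the full system, the emotion and sub-phonemic labels would then be combined and mapped to FAUs, with animation produced by morphing between them.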
Year: 2011
DOI: 10.1080/03081079.2010.544896
Venue: INTERNATIONAL JOURNAL OF GENERAL SYSTEMS
Keywords: audio-visual mapping, speech-driven facial animation, facial action units, speech emotion recognition
DocType: Journal
Volume: 40
Issue: 4
ISSN: 0308-1079
Citations: 1
PageRank: 0.36
References: 35
Authors: 4
Authors (in order):
1. Jia Liu
2. Mingyu You
3. Chun Chen
4. Mingli Song