Title
Evaluation of HMM-based visual laughter synthesis
Abstract
In this paper we apply speaker-dependent training of Hidden Markov Models (HMMs) separately to audio and to visual laughter synthesis. The two modalities are synthesized with a forced-durations approach and then combined to render audio-visual laughter on a 3D avatar. This paper focuses on the visual synthesis of laughter and its perceptual evaluation when combined with synthesized audio laughter. HMM-based audio and visual synthesis has previously been applied successfully to speech, and the extrapolation to audio laughter synthesis has already been done. This paper shows that it is possible to extrapolate to visual laughter synthesis as well.
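The forced-durations idea can be illustrated simply: the same explicit state-duration predictions drive parameter generation for both streams, so the audio and visual trajectories contain the same number of frames and stay synchronized when played back together. The snippet below is a minimal, self-contained Python sketch of that alignment principle only; the laughter unit labels, state counts, feature dimensions and duration values are invented placeholders, not the authors' actual HTS-based models or data.

    # Toy sketch of the "forced durations" alignment between separately
    # synthesized audio and visual laughter streams. Illustrative only:
    # unit names, dimensions and durations below are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical laughter unit sequence (e.g. inhalation, voiced "ha" bursts).
    units = ["inhale", "ha", "ha", "exhale"]

    # Toy per-unit state models: mean emission vector for each of 3 HMM states.
    AUDIO_DIM, VISUAL_DIM = 25, 12   # e.g. spectral+F0 features vs. facial parameters
    audio_means = {u: rng.normal(size=(3, AUDIO_DIM)) for u in set(units)}
    visual_means = {u: rng.normal(size=(3, VISUAL_DIM)) for u in set(units)}

    # Toy explicit duration model: frames spent in each state of a unit.
    durations = {u: np.array([4, 8, 4]) for u in set(units)}

    def generate(means, unit_seq, dur_model):
        """Build a trajectory by repeating each state's mean vector for its
        predicted duration (a crude stand-in for ML parameter generation)."""
        frames = []
        for u in unit_seq:
            for state, ndur in enumerate(dur_model[u]):
                frames.extend([means[u][state]] * int(ndur))
        return np.vstack(frames)

    # Forced durations: the SAME duration predictions drive both streams,
    # so audio and visual trajectories have identical frame counts.
    audio_traj = generate(audio_means, units, durations)
    visual_traj = generate(visual_means, units, durations)

    print(audio_traj.shape, visual_traj.shape)
    assert audio_traj.shape[0] == visual_traj.shape[0]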
Year
2014
DOI
10.1109/ICASSP.2014.6854469
Venue
Acoustics, Speech and Signal Processing
Keywords
audio-visual systems, avatars, extrapolation, hidden markov models, speech synthesis, 3d avatar, hmm based visual laughter synthesis evaluation, audio-visual laughter synthesis, hidden markov model, speaker-dependent training, audio, hmm, laughter, synthesis, visual, visualization, face, databases, speech, pipelines
Field
Laughter, Modalities, Computer science, Speech recognition, Natural language processing, Artificial intelligence, Hidden Markov model, Avatar
DocType
Conference
ISSN
1520-6149
Citations
8
PageRank
0.61
References
16
Authors
4
Name               Order  Citations  PageRank
Hüseyin Çakmak     1      54         8.05
Jérôme Urbain      2      146        12.20
Joëlle Tilmanne    3      107        12.24
T. Dutoit          4      313        30.47