Title
Near-videorealistic synthetic visual speech using non-rigid appearance models.
Abstract
In this paper we present work towards videorealistic synthetic visual speech using non-rigid appearance models. These models are used to track a talking face enunciating a set of training sentences. The resultant parameter trajectories are used in a concatenative synthesis scheme, where samples of original data are extracted from a corpus and concatenated to form new unseen sequences. Here we explore the effect on the synthesiser output of blending several synthesis units considered similar to the desired unit. We present preliminary subjective and objective results used to judge the realism of the system.
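The abstract describes a concatenative scheme: units of parameter trajectories are sampled from a corpus, the several units most similar to the desired one are blended, and the results are concatenated into a new sequence. The following is a minimal sketch of that idea, not the authors' implementation; the distance measure, the toy 1-D "trajectories", and all function names are hypothetical illustrations.

```python
def distance(a, b):
    """Euclidean distance between two equal-length parameter trajectories
    (here simplified to flat lists of floats)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def blend_similar_units(corpus, target, k=2):
    """Average (blend) the k corpus units closest to the desired unit."""
    ranked = sorted(corpus, key=lambda unit: distance(unit, target))
    chosen = ranked[:k]
    n = len(chosen)
    # Element-wise mean of the chosen trajectories.
    return [sum(vals) / n for vals in zip(*chosen)]

def synthesise(corpus, targets, k=2):
    """Concatenate one blended unit per desired unit into a new sequence."""
    sequence = []
    for target in targets:
        sequence.extend(blend_similar_units(corpus, target, k))
    return sequence
```

For example, with a corpus of two-sample units `[[0.0, 1.0], [0.2, 0.9], [5.0, 5.0]]` and a desired unit `[0.1, 1.0]`, blending the two nearest units yields their element-wise mean, `[0.1, 0.95]`; with `k=1` the scheme degenerates to plain unit selection.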
Year
2003
DOI
10.1109/ICASSP.2003.1200092
Venue
ICASSP
Keywords
computer animation,image sequences,solid modelling,speech processing,video signal processing,computer graphics,computer vision,concatenative synthesis scheme,near-videorealistic synthetic visual speech,nonrigid appearance models,parameter trajectories,talking face,training sentences,video sequences,visual speech synthesiser
Field
Speech processing,Concatenative synthesis,Speech synthesis,Pattern recognition,Computer science,Speech recognition,Artificial intelligence,Computer facial animation,Face detection,Hidden Markov model,Computer animation,Computer graphics
DocType
Conference
Volume
5
Citations
6
PageRank
0.64
References
10
Authors
4
Name                 Order  Citations  PageRank
Barry-John Theobald  1      332        25.39
Gavin C. Cawley      2      902        59.96
Iain Matthews        3      4900       253.61
J.A. Bangham         4      484        46.38