Title
Articulatory Control of HMM-based Parametric Speech Synthesis Driven by Phonetic Knowledge
Abstract
This paper presents a method to control the characteristics of synthetic speech flexibly by integrating articulatory features into a Hidden Markov Model (HMM)-based parametric speech synthesis system. In contrast to model adaptation and interpolation approaches to speaking style control, this method is driven by phonetic knowledge and does not require target speech samples. The joint distribution of parallel acoustic and articulatory features is estimated, taking cross-stream feature dependency into account. At synthesis time, acoustic and articulatory features are generated simultaneously under the maximum-likelihood criterion. The synthetic speech can be controlled flexibly by modifying the generated articulatory features according to arbitrary phonetic rules during parameter generation. Our experiments show that the proposed method is effective both in changing the overall character of synthesized speech and in controlling the quality of a specific vowel.
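To make the parameter generation step described above concrete, the following is a minimal LaTeX sketch of the standard maximum-likelihood generation criterion used in HMM-based synthesis, together with one possible form of the cross-stream dependency. The specific linear-transform link between articulatory and acoustic means (the symbols \(\mathbf{A}_j, \mathbf{b}_j\)) is an illustrative assumption, not a detail taken from this record.

\[
\hat{\mathbf{c}} \;=\; \arg\max_{\mathbf{c}} \; \mathcal{N}\!\bigl(\mathbf{W}\mathbf{c};\, \boldsymbol{\mu}_{\mathbf{q}}, \boldsymbol{\Sigma}_{\mathbf{q}}\bigr)
\quad\Longrightarrow\quad
\mathbf{W}^{\top}\boldsymbol{\Sigma}_{\mathbf{q}}^{-1}\mathbf{W}\,\hat{\mathbf{c}}
\;=\; \mathbf{W}^{\top}\boldsymbol{\Sigma}_{\mathbf{q}}^{-1}\boldsymbol{\mu}_{\mathbf{q}},
\]

where \(\mathbf{c}\) stacks the static features, \(\mathbf{W}\) appends the dynamic (delta) features, and \(\boldsymbol{\mu}_{\mathbf{q}}, \boldsymbol{\Sigma}_{\mathbf{q}}\) are the means and covariances along the chosen state sequence \(\mathbf{q}\). Under an assumed cross-stream dependency, the acoustic mean in state \(j\) is conditioned on the articulatory vector \(\mathbf{x}_t\), e.g. \(\boldsymbol{\mu}^{(y)}_{j}(\mathbf{x}_t) = \mathbf{A}_j \mathbf{x}_t + \mathbf{b}_j\). Modifying the generated articulatory trajectory \(\hat{\mathbf{x}}\) with phonetic rules before solving for the acoustic stream is what allows the synthetic speech to be controlled without target speech samples.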
Year
2008
Venue
INTERSPEECH 2008: 9th Annual Conference of the International Speech Communication Association, Vols 1-5
Keywords
speech synthesis, hidden Markov model, articulatory features, phonetic knowledge
Field
Speech synthesis, Joint probability distribution, Pattern recognition, Computer science, Interpolation, Speech recognition, Parametric statistics, Artificial intelligence, Vowel, Hidden Markov model, Speaking style, Feature dependency
DocType
Conference
Citations
16
PageRank
1.03
References
10
Authors
4
Name               Order   Citations   PageRank
Zhen-Hua Ling      1       850         83.08
Korin Richmond     2       531         46.14
Junichi Yamagishi  3       1906        145.51
Ren-Hua Wang       4       344         41.36