Title |
---|
Intonational Phrase Break Prediction For Text-To-Speech Synthesis Using Dependency Relations |
Abstract |
---|
Intonational phrase (IP) break prediction is an important aspect of front-end analysis in a text-to-speech system. Standard approaches to intonational phrase break prediction rely on linguistic rules or, more recently, on lexicalized data-driven models. Linguistic rules are not robust, while data-driven models based on lexical identity do not generalize across domains. To overcome these challenges, in this paper we explore the use of syntactic features to predict intonational phrase breaks. On a test set of over 40 thousand words, a lexically driven IP break prediction model yields an F-score of 0.82, while a non-lexicalized model that uses part-of-speech tags and dependency relations achieves an F-score of 0.81, with the added benefit of being more portable across domains. In this work, we also examine the effect of contextual information on prediction performance. Our evaluation shows that using a three-token left context in a POS-tag-based model results in only a 2% drop in recall compared to a model that uses both left and right context, which suggests the viability of such a model for incremental text-to-speech systems. |
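The non-lexicalized model described in the abstract predicts IP breaks from part-of-speech tags and dependency relations over a limited context window, including a left-only window for the incremental setting. A minimal sketch of assembling such windowed syntactic features is shown below; the input format, function name, and padding scheme are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of windowed syntactic feature extraction for IP break prediction.
# Assumes tokens are already POS-tagged and dependency-parsed (hypothetical
# (word, POS, deprel) triples); the paper's real feature set is not shown.

def window_features(tokens, index, left=3, right=0, pad="<PAD>"):
    """Collect POS-tag and dependency-relation features for a window of
    `left` tokens before and `right` tokens after position `index`.
    A left-only window (right=0) mirrors the incremental setting
    discussed in the abstract."""
    feats = []
    for offset in range(-left, right + 1):
        j = index + offset
        if 0 <= j < len(tokens):
            _, pos, dep = tokens[j]
        else:
            pos, dep = pad, pad  # pad beyond sentence boundaries
        feats.append(f"pos[{offset}]={pos}")
        feats.append(f"dep[{offset}]={dep}")
    return feats

# Toy sentence with assumed POS tags and dependency relations.
sentence = [
    ("The", "DT", "det"),
    ("cat", "NN", "nsubj"),
    ("sat", "VBD", "root"),
    ("quietly", "RB", "advmod"),
]

# Three-token left context at "sat" (index 2), as in the incremental model.
print(window_features(sentence, 2))
```

These feature strings would then feed a standard classifier; the paper's choice of learner and exact feature templates are not reproduced here.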
Year | Venue | Keywords |
---|---|---
2015 | 2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP) | Intonational phrase, phrase breaks, IP prediction, prosody, text-analysis |
Field | DocType | ISSN
---|---|---
Speech synthesis, Computer science, Phrase, Context model, Speech recognition, Left and right, Text to speech synthesis, Artificial intelligence, Natural language processing, Syntax, Recall, Test set | Conference | 1520-6149
Citations | PageRank | References
---|---|---
4 | 0.41 | 4
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---
Taniya Mishra | 1 | 89 | 11.66 |
Yeon-jun Kim | 2 | 4 | 0.41 |
Srinivas Bangalore | 3 | 1319 | 157.37 |