Title
RetrieverTTS: Modeling Decomposed Factors for Text-Based Speech Insertion
Abstract
This paper proposes a new "decompose-and-edit" paradigm for the text-based speech insertion task that supports arbitrary-length speech insertion and even full-sentence generation. In the proposed paradigm, global and local factors in speech are explicitly decomposed and separately manipulated to achieve high speaker similarity and continuous prosody. Specifically, we propose to represent the global factors by multiple tokens, which are extracted by a cross-attention operation and then injected back by a link-attention operation. Owing to this rich representation of global factors, we achieve high speaker similarity in a zero-shot manner. In addition, we introduce a prosody smoothing task to make the local prosody factor context-aware and thereby achieve satisfactory prosody continuity. We further attain high voice quality with an adversarial training stage. In subjective tests, our method achieves state-of-the-art performance in both naturalness and similarity. Audio samples can be found at https://ydcustc.github.io/retrieverTTS-demo/.
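The token-based decomposition described in the abstract can be illustrated with a small sketch: a set of learned query tokens summarizes a local feature sequence via cross-attention, and a second attention layer injects the resulting global tokens back into the sequence (standing in for the paper's "link-attention"). All dimensions, module names, and the residual fusion below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GlobalTokenSketch(nn.Module):
    """Hypothetical sketch of extracting multiple global-factor tokens
    via cross-attention and re-injecting them into local features."""

    def __init__(self, dim=256, n_tokens=8, n_heads=4):
        super().__init__()
        # Learned queries, one per global token (count is an assumption).
        self.queries = nn.Parameter(torch.randn(n_tokens, dim))
        self.extract = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.inject = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, local_feats):  # local_feats: (batch, frames, dim)
        b = local_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)  # (b, n_tokens, dim)
        # Cross-attention: tokens attend over the whole utterance.
        global_tokens, _ = self.extract(q, local_feats, local_feats)
        # Injection: each frame attends to the global tokens.
        fused, _ = self.inject(local_feats, global_tokens, global_tokens)
        return local_feats + fused, global_tokens

x = torch.randn(2, 100, 256)          # two utterances, 100 frames each
out, tokens = GlobalTokenSketch()(x)
print(out.shape, tokens.shape)
```

Because the global factors live in a fixed, small set of tokens, they can be swapped in from a reference utterance while the local (prosody) path stays frame-aligned, which is what enables zero-shot speaker transfer in this paradigm.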
Year
2022
DOI
10.21437/Interspeech.2022-245
Venue
Conference of the International Speech Communication Association (INTERSPEECH)
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
9
Name	Order	Citations	PageRank
Dacheng Yin	1	0	1.01
Chuanxin Tang	2	0	2.03
Yanqing Liu	3	0	1.35
Xiaoqiang Wang	4	0	1.35
Zhiyuan Zhao	5	0	0.68
Yucheng Zhao	6	0	2.03
Zhiwei Xiong	7	244	46.90
Sheng Zhao	8	24	9.16
Chong Luo	9	696	47.36