Title
Forward–Backward Decoding Sequence for Regularizing End-to-End TTS
Abstract
Neural end-to-end TTS systems such as Tacotron-like networks can generate very high-quality synthesized speech, even close to human recordings for in-domain text. However, they perform unsatisfactorily when scaled to more challenging test sets. One concern is that the attention-based encoder-decoder network is an autoregressive generative sequence model and therefore suffers from "exposure bias": errors made early can be quickly amplified, harming subsequent sequence generation. To address this issue, we propose two novel methods that aim at predicting the future by improving the agreement between the forward and backward decoding sequences. The first, denoted MRBA, adds divergence regularization terms to the model training objective to maximize the agreement between two directional models: L2R, which generates targets from left to right, and R2L, which generates targets from right to left. The second, denoted BDR, operates at the decoder level and exploits future information during decoding. By introducing a regularization term into the training objective of the forward and backward decoders, the forward decoder's hidden states are forced to be close to the backward decoder's; thus, the hidden representations of a unidirectional decoder are encouraged to embed useful information about the future. Moreover, a joint training method is designed so that forward and backward decoding improve each other in an interactive process. Experimental results on both an English and a Mandarin dataset show that our proposed methods, especially BDR, lead to significant improvements in both robustness and overall naturalness: they achieve clear preference advantages on a challenging test set, and they achieve state-of-the-art performance on a general test set, outperforming the baseline (a revised version of Tacotron 2) by MOS gaps of 0.13 and 0.12 for English and Mandarin, respectively.
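To make the two regularizers concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract: MRBA as an agreement penalty between the left-to-right outputs and the time-reversed right-to-left outputs, and BDR as an agreement penalty between the two decoders' hidden states, both added to the per-decoder reconstruction losses for joint training. The tensor shapes, the choice of L1/L2 distances, and the weights alpha and beta are illustrative assumptions, not the authors' exact formulation.

# Sketch of MRBA- and BDR-style regularization, assuming all tensors
# have shape (batch, time, dim) and the R2L decoder stores its states
# in its own generation order (last target frame first).
import torch
import torch.nn.functional as F

def mrba_term(y_l2r: torch.Tensor, y_r2l: torch.Tensor) -> torch.Tensor:
    """MRBA-style agreement: penalize divergence between the L2R outputs
    and the time-reversed R2L outputs (assumed squared-error divergence)."""
    y_r2l_aligned = torch.flip(y_r2l, dims=[1])  # step t now refers to frame t
    return F.mse_loss(y_l2r, y_r2l_aligned)

def bdr_term(h_fwd: torch.Tensor, h_bwd: torch.Tensor) -> torch.Tensor:
    """BDR-style agreement: pull the forward decoder's hidden states toward
    the (time-reversed) backward decoder's, so the unidirectional decoder
    is encouraged to embed information about future frames."""
    h_bwd_aligned = torch.flip(h_bwd, dims=[1])
    return F.mse_loss(h_fwd, h_bwd_aligned)

def joint_loss(y_l2r, y_r2l, h_fwd, h_bwd, mel_target, alpha=0.25, beta=0.25):
    """Joint training objective: each decoder keeps its own reconstruction
    loss, and the agreement terms let the two directions improve each other."""
    recon_fwd = F.l1_loss(y_l2r, mel_target)
    recon_bwd = F.l1_loss(torch.flip(y_r2l, dims=[1]), mel_target)
    return (recon_fwd + recon_bwd
            + alpha * mrba_term(y_l2r, y_r2l)
            + beta * bdr_term(h_fwd, h_bwd))

At inference time only the forward decoder would be run, so the backward decoder and both agreement terms add training-time cost but no synthesis-time cost.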
Year
2019
DOI
10.1109/TASLP.2019.2935807
Venue
IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP)
Keywords
Decoding, Training, Speech processing, Linguistics, Acoustics, Speech recognition
Field
Speech processing, Autoregressive model, Computer science, End-to-end principle, Naturalness, Exploit, Speech recognition, Robustness (computer science), Regularization (mathematics), Decoding methods
DocType
Journal
Volume
27
Issue
12
ISSN
2329-9290
Citations
1
PageRank
0.37
References
4
Authors
4
Name         Order  Citations  PageRank
Yibin Zheng  1      38         15.13
Jianhua Tao  2      848        138.00
Zhengqi Wen  3      86         24.41
Jiangyan Yi  4      19         17.99