Title
PATNET: A PHONEME-LEVEL AUTOREGRESSIVE TRANSFORMER NETWORK FOR SPEECH SYNTHESIS
Abstract
Aiming at efficiently predicting acoustic features with high naturalness and robustness, this paper proposes PATNet, a neural acoustic model for speech synthesis with phoneme-level autoregression. PATNet accepts phoneme sequences as input and is built on the Transformer architecture. Instead of an attention mechanism, it adopts a duration model for sequence alignment. Given the predicted spectra of previous phonemes, the decoder predicts all spectral frames within one phoneme in parallel. This phoneme-level autoregression gives PATNet higher inference efficiency than models with frame-level autoregression, such as Transformer-TTS, and improves the robustness of acoustic feature prediction by exploiting phoneme boundaries explicitly. Experimental results show that speech synthesized by PATNet obtained a lower character error rate (CER) than Tacotron, Transformer-TTS and FastSpeech when evaluated by a speech recognition engine. In addition, PATNet achieved roughly 10 times faster inference than Transformer-TTS and significantly better naturalness than FastSpeech.
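The decoding scheme described in the abstract can be summarized in a minimal sketch. The code below is an illustration only, not the authors' released implementation: the class name PATNetSketch, the 80-bin mel-spectrogram target, and all layer sizes are assumptions. It shows the core idea under those assumptions: a duration model (rather than attention) assigns a frame count to each phoneme, and all frames of phoneme i are then decoded in one parallel call while attending only to the spectra of phonemes before i.

```python
import torch
import torch.nn as nn

N_MELS = 80  # assumed mel-spectrogram dimensionality (not stated in the abstract)

class PATNetSketch(nn.Module):
    """Hypothetical sketch of phoneme-level autoregressive decoding."""

    def __init__(self, n_phonemes: int, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=3)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=3)
        self.duration = nn.Linear(d_model, 1)   # duration model replaces attention alignment
        self.spec_proj = nn.Linear(N_MELS, d_model)
        self.out = nn.Linear(d_model, N_MELS)

    @torch.no_grad()
    def infer(self, phonemes: torch.Tensor) -> torch.Tensor:
        """phonemes: (1, P) phoneme IDs -> (1, T, N_MELS) predicted spectrogram."""
        h = self.encoder(self.embed(phonemes))                 # (1, P, d_model)
        # Frames per phoneme from the duration model (rounded, at least 1).
        dur = self.duration(h).squeeze(-1).round().clamp(min=1).long()
        spec = torch.zeros(1, 0, N_MELS)                       # frames generated so far
        for i in range(phonemes.size(1)):
            n = int(dur[0, i])
            # All n frames of phoneme i are decoded in ONE parallel call:
            # the queries are the phoneme's encoding repeated n times.
            queries = h[:, i:i + 1, :].expand(1, n, -1)
            # Phoneme-level autoregression: the decoder attends only to the
            # spectra of phonemes < i (a zero "go" frame seeds the first step).
            memory = self.spec_proj(
                torch.cat([torch.zeros(1, 1, N_MELS), spec], dim=1))
            frames = self.out(self.decoder(queries, memory))   # (1, n, N_MELS)
            spec = torch.cat([spec, frames], dim=1)
        return spec
```

A real implementation would also add frame-position embeddings within each phoneme and train the duration model on forced-alignment phoneme boundaries; both are omitted here for brevity.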
Year
2021
DOI
10.1109/ICASSP39728.2021.9413658
Venue
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)
Keywords
speech synthesis, sequence-to-sequence, Transformer, phoneme-level autoregression
DocType
Conference
Citations
1
PageRank
0.37
References
0
Authors
5
Name | Order | Citations | PageRank
Shiming Wang | 1 | 1 | 2.40
Zhen-Hua Ling | 2 | 850 | 83.08
Ruibo Fu | 3 | 1 | 5.11
Jiangyan Yi | 4 | 19 | 17.99
Jianhua Tao | 5 | 848 | 138.00