Title
DISENTANGLED SPEAKER AND LANGUAGE REPRESENTATIONS USING MUTUAL INFORMATION MINIMIZATION AND DOMAIN ADAPTATION FOR CROSS-LINGUAL TTS
Abstract
We propose a method for obtaining disentangled speaker and language representations via mutual information minimization and domain adaptation for cross-lingual text-to-speech (TTS) synthesis. The proposed method extracts speaker and language embeddings from acoustic features using a speaker encoder and a language encoder. It then applies domain adaptation to the two embeddings to obtain a language-invariant speaker embedding and a speaker-invariant language embedding. To further disentangle the representations, the method minimizes the mutual information between the two embeddings, removing entangled information from each embedding. Disentangled speaker and language representations are critical for cross-lingual TTS synthesis, since entangled representations make it difficult to maintain speaker identity when the language representation changes, which in turn degrades performance. We evaluate the proposed method on English and Japanese multi-speaker datasets with a total of 207 speakers. Experimental results demonstrate that the proposed method significantly improves the naturalness and speaker similarity of both intra-lingual and cross-lingual TTS synthesis. Furthermore, we show that the proposed method maintains speaker identity well across languages.
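The abstract combines two objectives: adversarial domain adaptation, which makes each embedding invariant to the other attribute, and mutual information minimization between the speaker and language embeddings. The PyTorch sketch below illustrates one way such losses could be wired together. All module names, dimensions, and the gradient-reversal formulation are illustrative assumptions, and the correlation-based stand-in for the mutual information term is a simplification (the abstract does not specify which MI estimator the authors use), not the authors' implementation.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients on backward,
    so the upstream encoder is trained to fool the attached classifier."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None


def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)


class DisentanglementLoss(nn.Module):
    """Hypothetical heads: a language classifier on the speaker embedding and a
    speaker classifier on the language embedding, both behind gradient reversal,
    plus a cheap proxy for the mutual information between the two embeddings."""

    def __init__(self, spk_dim=256, lang_dim=64, n_speakers=207, n_langs=2):
        super().__init__()
        self.lang_from_spk = nn.Linear(spk_dim, n_langs)
        self.spk_from_lang = nn.Linear(lang_dim, n_speakers)
        self.ce = nn.CrossEntropyLoss()

    def forward(self, spk_emb, lang_emb, spk_id, lang_id):
        # Domain adaptation: each adversary tries to recover the attribute the
        # embedding should NOT carry; gradient reversal trains the encoders to
        # erase it, yielding a language-invariant speaker embedding and a
        # speaker-invariant language embedding.
        adv_lang = self.ce(self.lang_from_spk(grad_reverse(spk_emb)), lang_id)
        adv_spk = self.ce(self.spk_from_lang(grad_reverse(lang_emb)), spk_id)
        # MI stand-in (assumption): penalize squared cross-correlation between
        # the standardized embeddings. Zero correlation is necessary (though
        # not sufficient) for zero mutual information; a learned upper bound
        # would be a more faithful choice.
        s = (spk_emb - spk_emb.mean(0)) / (spk_emb.std(0) + 1e-5)
        l = (lang_emb - lang_emb.mean(0)) / (lang_emb.std(0) + 1e-5)
        mi_proxy = (s.t() @ l / s.size(0)).pow(2).mean()
        return adv_lang + adv_spk + mi_proxy


# Usage on a random batch of 8 utterances (two languages, 207 speakers,
# matching the dataset size reported in the abstract).
loss_fn = DisentanglementLoss()
loss = loss_fn(torch.randn(8, 256), torch.randn(8, 64),
               spk_id=torch.randint(0, 207, (8,)),
               lang_id=torch.randint(0, 2, (8,)))
loss.backward()
```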
Year
2021
DOI
10.1109/ICASSP39728.2021.9414226
Venue
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)
Keywords
Text-to-speech synthesis, cross-lingual, domain adaptation, mutual information, speaker embedding
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name                  Order  Citations  PageRank
Detai Xin             1      0          0.34
Tatsuya Komatsu       2      0          1.69
Shinnosuke Takamichi  3      752        2.08
Saruwatari, H.        4      6529       0.81