Title
Discriminative Learning for Monaural Speech Separation Using Deep Embedding Features
Abstract
Deep clustering (DC) and utterance-level permutation invariant training (uPIT) have proven promising for speaker-independent speech separation. DC is usually formulated as a two-step process, embedding learning followed by embedding clustering, which complicates the separation pipeline and prevents direct optimization of the actual separation objective. uPIT, for its part, only minimizes the loss of the permutation with the lowest mean square error and does not discriminate that permutation from the others. In this paper, we propose a discriminative learning method for speaker-independent speech separation using deep embedding features. First, a DC network is trained to extract deep embedding features, which encode each source's information and are effective at discriminating the target speakers. These features are then used as the input to uPIT, which directly separates the sources. Finally, uPIT and DC are jointly trained, directly optimizing the actual separation objective. Moreover, to maximize the distance between permutations, discriminative learning is applied to fine-tune the whole model. Our experiments are conducted on the WSJ0-2mix dataset. Experimental results show that the proposed models outperform DC and uPIT for speaker-independent speech separation.
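The uPIT objective described in the abstract, and the discriminative variant that pushes the chosen permutation away from the others, can be sketched in a few lines. This is a minimal NumPy illustration with made-up shapes and an illustrative weight `alpha`, not the paper's implementation:

```python
from itertools import permutations

import numpy as np


def upit_mse(estimates, targets):
    """uPIT loss: minimum MSE over all output-to-speaker assignments.

    estimates, targets: arrays of shape (num_speakers, num_frames).
    """
    n = estimates.shape[0]
    return min(
        np.mean((estimates[list(perm)] - targets) ** 2)
        for perm in permutations(range(n))
    )


def discriminative_upit(estimates, targets, alpha=0.1):
    """Discriminative variant (sketch): keep the lowest-MSE permutation as
    the loss, but subtract the losses of the non-optimal permutations so
    that training enlarges the gap between them. alpha is a hypothetical
    weight, not a value from the paper."""
    losses = sorted(
        np.mean((estimates[list(perm)] - targets) ** 2)
        for perm in permutations(range(estimates.shape[0]))
    )
    return losses[0] - alpha * sum(losses[1:])
```

Because the loss is permutation-invariant, an estimate whose outputs are swapped relative to the reference speakers still incurs zero loss.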
Year: 2019
DOI: 10.21437/Interspeech.2019-1940
Venue: INTERSPEECH
DocType: Conference
Citations: 2
PageRank: 0.40
References: 0
Authors: 5
Name          Order  Citations  PageRank
Cunhang Fan   1      2          3.79
Bin Liu       2      191        35.02
Jianhua Tao   3      848        138.00
Jiangyan Yi   4      19         17.99
Zhengqi Wen   5      86         24.41