Title
Two-Stage Multi-Target Joint Learning For Monaural Speech Separation
Abstract
Recently, supervised speech separation has been extensively studied and has shown considerable promise. Due to the temporal continuity of speech, auditory features and separation targets exhibit prominent spectro-temporal structure and strong correlations in the time-frequency (T-F) domain, which can be exploited for speech separation. However, many supervised speech separation methods model each T-F unit independently with only a single target, largely ignoring this useful information. In this paper, we propose a two-stage multi-target joint learning method that jointly models the related speech separation targets at the frame level. Systematic experiments show that the proposed approach consistently achieves better separation and generalization performance in low signal-to-noise ratio (SNR) conditions.
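The paper's method itself is not reproduced in this record; as an illustrative aid only, the sketch below shows one generic way to set up frame-level multi-target joint learning for T-F masking-based separation. The network shape, feature sizes, the particular target pair (an ideal ratio mask plus the clean log-magnitude spectrum), and the loss weighting alpha are all assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' system) of multi-target joint learning:
# a shared trunk predicts two correlated separation targets per frame, so
# structure shared across targets regularizes both predictions.
import torch
import torch.nn as nn

N_FREQ = 64          # number of frequency channels (assumed)
CONTEXT = 5          # context frames spliced on each side (assumed)
IN_DIM = N_FREQ * (2 * CONTEXT + 1)

class MultiTargetNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(IN_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
        )
        # Two heads share the trunk: a bounded T-F mask and an
        # unbounded clean log-magnitude spectrum estimate.
        self.mask_head = nn.Sequential(nn.Linear(1024, N_FREQ), nn.Sigmoid())
        self.spec_head = nn.Linear(1024, N_FREQ)

    def forward(self, x):
        h = self.trunk(x)
        return self.mask_head(h), self.spec_head(h)

def joint_loss(mask_pred, spec_pred, mask_true, spec_true, alpha=0.5):
    """Weighted sum of per-target MSE losses (alpha is an assumption)."""
    mse = nn.functional.mse_loss
    return alpha * mse(mask_pred, mask_true) + (1 - alpha) * mse(spec_pred, spec_true)

# Toy usage with random tensors standing in for real features and targets.
net = MultiTargetNet()
x = torch.randn(32, IN_DIM)
loss = joint_loss(*net(x), torch.rand(32, N_FREQ), torch.randn(32, N_FREQ))
loss.backward()
```

Sharing a trunk while supervising several related targets is one common realization of joint multi-target learning; a two-stage variant could, for instance, feed first-stage target estimates back as inputs to a second network.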
Year: 2015
Venue: 16TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2015), VOLS 1-5
Keywords: speech separation, multi-target learning, computational auditory scene analysis (CASA)
Field: Temporal continuity, Pattern recognition, Computer science, Speech recognition, Artificial intelligence, Monaural
DocType: Conference
Citations: 1
PageRank: 0.36
References: 11
Authors: 7
Name             Order  Citations  PageRank
Shuai Nie        1      40         8.30
Shan Liang       2      20         8.52
Wei Xue          3      3          1.39
Xueliang Zhang   4      80         19.41
Wenju Liu        5      214        39.32
Like Dong        6      3          1.09
Hong Yang        7      3          1.09