Title
SPEAKER AND DIRECTION INFERRED DUAL-CHANNEL SPEECH SEPARATION
Abstract
Most speech separation methods try to separate all sources simultaneously and still lack the generalization ability needed for real scenarios, where the number of input sounds is usually unknown and may even change over time. In this work, we borrow ideas from binaural auditory attention and propose a speaker- and direction-inferred speech separation network (dubbed SDNet) to address the cocktail party problem. Specifically, SDNet first parses the mixture of the scene, in a sequential manner, into perceptual representations that carry the speaker and direction characteristics of each source. These perceptual representations are then used to attend to the corresponding speech. With the help of spatial features, our model generates more precise perceptual representations and naturally handles both the unknown number of sources and the selection of outputs. Experiments on the standard fully overlapped speech separation benchmarks WSJ0-2mix, WSJ0-3mix, and WSJ0-2&3mix demonstrate the effectiveness of our method, which achieves SDR improvements of 25.31 dB, 17.26 dB, and 21.56 dB under anechoic settings. Our code will be released at https://github.com/aispeech-lab/SDNet.
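The abstract's pipeline of sequentially inferring a speaker-and-direction representation and then using it to attend to the corresponding speech can be sketched roughly as below. This is a minimal illustration under assumptions, not the authors' SDNet: all module names, dimensions, the stop-flag mechanism, and the "explaining away" step are hypothetical, and the dual-channel spatial features are reduced to a generic feature tensor.

import torch
import torch.nn as nn


class SequentialSpeakerInference(nn.Module):
    """Sequentially infers one speaker/direction embedding per step until a stop flag fires."""

    def __init__(self, feat_dim: int, emb_dim: int = 128, max_speakers: int = 3):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, emb_dim, batch_first=True)
        self.query = nn.Linear(feat_dim, emb_dim)    # projects features for the attention scores
        self.stop_head = nn.Linear(emb_dim, 1)       # probability that no source remains
        self.emb_head = nn.Linear(emb_dim, emb_dim)  # speaker + direction embedding
        self.max_speakers = max_speakers

    def forward(self, mix_feats: torch.Tensor) -> list:
        # mix_feats: (batch, time, feat_dim) spectral plus spatial (e.g. inter-channel) features
        embeddings, residual = [], mix_feats
        for _ in range(self.max_speakers):
            _, h = self.encoder(residual)                    # summarize what is still unexplained
            h = h.squeeze(0)                                 # (batch, emb_dim)
            if embeddings and bool((torch.sigmoid(self.stop_head(h)) > 0.5).all()):
                break                                        # no further source inferred
            emb = self.emb_head(h)
            embeddings.append(emb)
            # crude "explaining away": down-weight the frames this embedding attends to
            scores = (self.query(residual) * emb.unsqueeze(1)).sum(-1, keepdim=True)
            residual = residual * (1.0 - torch.softmax(scores, dim=1))
        return embeddings                                    # variable length: source count is not fixed


class AttendAndSeparate(nn.Module):
    """Uses each inferred embedding to mask (attend to) its source in the mixture features."""

    def __init__(self, feat_dim: int, emb_dim: int = 128):
        super().__init__()
        self.mask_net = nn.Sequential(
            nn.Linear(feat_dim + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.Sigmoid(),
        )

    def forward(self, mix_feats: torch.Tensor, embeddings: list) -> list:
        outputs = []
        for emb in embeddings:
            cond = emb.unsqueeze(1).expand(-1, mix_feats.size(1), -1)
            mask = self.mask_net(torch.cat([mix_feats, cond], dim=-1))
            outputs.append(mask * mix_feats)                 # one estimated source per embedding
        return outputs


if __name__ == "__main__":
    feats = torch.randn(2, 100, 257)                         # dummy mixture feature sequences
    infer, separate = SequentialSpeakerInference(257), AttendAndSeparate(257)
    sources = separate(feats, infer(feats))
    print(len(sources), [s.shape for s in sources])

Because the number of inferred embeddings is decided by the stop flag rather than fixed in advance, the separator returns a variable-length list of estimated sources, which mirrors how the paper avoids committing to a fixed number of outputs.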
Year
2021
DOI
10.1109/ICASSP39728.2021.9413818
Venue
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)
Keywords
dual-channel speech separation, speaker and direction-inferred separation, cocktail party problem
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name (Order)
1. Chenxing Li
2. Jiaming Xu
3. Nima Mesgarani
4. Bo Xu