Title
Improving Source Separation by Explicitly Modeling Dependencies between Sources
Abstract
We propose a new method for training a supervised source separation system that learns the dependencies among all sources in a mixture. Rather than estimating each source independently from the mix, we reframe source separation as orderless Neural Autoregressive Density Estimation (NADE) and estimate each source from both the mix and a random subset of the other sources. We adapt a standard source separation architecture, Demucs, so that it accepts an additional input for each individual source alongside the input mixture. We randomly mask these source inputs during training so that the network learns the conditional dependencies between the sources. By pairing this training method with a blocked Gibbs sampling procedure at inference time, we show that the network can iteratively improve its separation performance by conditioning each source estimate on its earlier estimates of the other sources. Experiments on two source separation datasets show that training a Demucs model with an orderless NADE approach and using Gibbs sampling (up to 512 steps) at inference time strongly outperforms a Demucs baseline that uses a standard regression loss and direct (one-step) estimation of sources.
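The two procedures the abstract describes, random source masking during training and blocked Gibbs resampling at inference, can be sketched in PyTorch as below. This is a minimal sketch, not the paper's implementation: the model interface `model(mixture, conditioning, mask)` returning per-source estimates of shape (batch, n_sources, channels, time), the subset-sampling scheme, the loss applied only to hidden sources, and the fixed resampling probability are all illustrative assumptions, and `ToyModel` is a hypothetical stand-in for the adapted Demucs.

```python
import torch

def nade_training_step(model, mixture, sources, optimizer):
    # mixture: (batch, channels, time) waveform of the mix
    # sources: (batch, n_sources, channels, time) ground-truth stems
    batch, n_sources = sources.shape[:2]
    # Reveal a random subset of sources as conditioning inputs; sampling the
    # subset rate per example first makes all subset sizes likely.
    visible = torch.rand(batch, n_sources, device=sources.device) < \
        torch.rand(batch, 1, device=sources.device)
    conditioning = sources * visible[:, :, None, None]
    estimates = model(mixture, conditioning, visible)
    # L1 regression loss on the hidden sources only, so the network models
    # p(hidden sources | mixture, visible sources). (Assumption: the paper
    # may instead penalize all source estimates.)
    hidden = (~visible)[:, :, None, None]
    loss = ((estimates - sources).abs() * hidden).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def gibbs_separate(model, mixture, n_sources, n_steps=512, p_keep=0.5):
    # Start from silence: the first pass estimates all sources from the mix.
    batch, channels, time = mixture.shape
    estimates = mixture.new_zeros(batch, n_sources, channels, time)
    for _ in range(n_steps):
        # Blocked Gibbs: hold a random block of current estimates fixed and
        # re-estimate the rest conditioned on them. (A fixed p_keep is a
        # simplification; an annealed schedule over steps is also plausible.)
        keep = torch.rand(batch, n_sources, device=mixture.device) < p_keep
        conditioning = estimates * keep[:, :, None, None]
        proposals = model(mixture, conditioning, keep)
        estimates = torch.where(keep[:, :, None, None], estimates, proposals)
    return estimates

if __name__ == "__main__":
    # Toy stand-in for the adapted Demucs: each source is predicted as a
    # learned linear blend of the mixture and the conditioning inputs.
    class ToyModel(torch.nn.Module):
        def __init__(self, n_sources):
            super().__init__()
            self.w = torch.nn.Parameter(torch.randn(n_sources, n_sources + 1))
        def forward(self, mixture, conditioning, mask):
            inputs = torch.cat([mixture[:, None], conditioning], dim=1)
            return torch.einsum("sk,bkct->bsct", self.w, inputs)

    model = ToyModel(n_sources=4)
    mixture = torch.randn(2, 2, 1000)
    sources = torch.randn(2, 4, 2, 1000)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    print(nade_training_step(model, mixture, sources, opt))
    print(gibbs_separate(model, mixture, n_sources=4, n_steps=8).shape)
```

Note that because every Gibbs step conditions only on the mixture and the kept block of estimates, each iteration reuses the same trained network; the 512-step budget from the abstract trades inference time for separation quality.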
Year
2022
DOI
10.1109/ICASSP43922.2022.9747076
Venue
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Keywords
music source separation, orderless NADE, Gibbs sampling
DocType
Conference
ISSN
1520-6149
ISBN
978-1-6654-0541-6
Citations
0
PageRank
0.34
References
5
Authors
5
Name                  Order  Citations  PageRank
Ethan Manilow         1      0          0.34
Curtis Hawthorne      2      0          0.34
Cheng-Zhi Anna Huang  3      0          0.34
Bryan Pardo           4      830        63.92
Jesse Engel           5      0          0.34