Title
Decoupling Magnitude and Phase Estimation with Deep ResUNet for Music Source Separation
Abstract
Deep neural network based methods have been successfully applied to music source separation. They typically learn a mapping from a mixture spectrogram to a set of source spectrograms, all with magnitudes only. This approach has several limitations: 1) incorrect phase reconstruction degrades performance; 2) it limits mask magnitudes to between 0 and 1, although we observe that 22% of time-frequency bins in MUSDB18, a popular dataset, have ideal ratio mask values over 1; 3) its potential with very deep architectures is under-explored. Our proposed system is designed to overcome these limitations. First, we propose to estimate phases by estimating complex ideal ratio masks (cIRMs), where we decouple the estimation of cIRMs into magnitude and phase estimation. Second, we extend the separation method to effectively allow mask magnitudes larger than 1. Finally, we propose a residual UNet architecture with up to 143 layers. Our proposed system achieves state-of-the-art MSS results on the MUSDB18 dataset; in particular, an SDR of 8.98 dB on vocals, outperforming the previous best of 7.24 dB. The source code is available at: https://github.com/bytedance/music_source_separation
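The core idea in the abstract can be sketched in a few lines: a complex ideal ratio mask (cIRM) is estimated as a separate magnitude part (non-negative and allowed to exceed 1) and a phase part, then applied to the mixture STFT. This is a minimal illustration of that decoupling, not the authors' implementation; the function name and toy values are assumptions.

```python
import numpy as np

def apply_decoupled_cirm(mixture_stft, mask_mag, mask_phase):
    """Apply a cIRM given separately estimated magnitude and phase parts.

    mixture_stft: complex STFT of the mixture, shape (freq, time)
    mask_mag:     non-negative mask magnitude; values may exceed 1
                  (the paper notes ~22% of MUSDB18 bins need IRM > 1)
    mask_phase:   mask phase in radians
    """
    # Recombine the two decoupled estimates into one complex mask.
    cirm = mask_mag * np.exp(1j * mask_phase)
    # Element-wise complex multiplication scales the mixture magnitude
    # and rotates its phase toward the estimated source phase.
    return cirm * mixture_stft

# Toy usage (hypothetical values): a mask magnitude of 1.5 amplifies the
# bin beyond the mixture level, and a pi/2 phase shift rotates its phase.
mix = np.array([[1.0 + 0.0j]])
sep = apply_decoupled_cirm(mix,
                           mask_mag=np.array([[1.5]]),
                           mask_phase=np.array([[np.pi / 2]]))
```

Keeping the magnitude and phase as separate outputs is what lets the magnitude head use an unbounded (e.g. ReLU-like) activation instead of a sigmoid capped at 1.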
Year: 2021
Venue: ISMIR
DocType: Conference
ISSN: International Society for Music Information Retrieval (ISMIR) 2021
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name           Order  Citations  PageRank
Qiuqiang Kong  1      68         18.75
Yin Cao        2      0          1.69
Haohe Liu      3      0          2.70
Keunwoo Choi   4      115        10.26
Yu-Xuan Wang   5      650        32.68