Abstract
---

The rising interest in single-channel multi-speaker speech separation has sparked the development of End-to-End (E2E) approaches to multi-speaker speech recognition. However, state-of-the-art neural network-based time-domain source separation has not yet been combined with E2E speech recognition. Here we demonstrate how to combine a separation module based on the Convolutional Time-domain Audio Separation Network (Conv-TasNet) with an E2E speech recognizer, and how to train such a model jointly, either by distributing it over multiple GPUs or by approximating truncated back-propagation for the convolutional front-end. To put this work into perspective and illustrate the complexity of the design space, we provide a compact overview of single-channel multi-speaker recognition systems. Our experiments show a word error rate of 11.0% on WSJ0-2mix and indicate that our joint time-domain model can yield substantial improvements over the cascaded DNN-HMM and monolithic E2E frequency-domain systems proposed so far.
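The abstract describes feeding the separated waveforms of a Conv-TasNet-style time-domain separator directly into an E2E recognizer and training both parts jointly. The following is a minimal PyTorch-style sketch of that overall structure only; `SeparatorStub`, `RecognizerStub`, and all layer sizes are illustrative placeholders and not the authors' implementation.

```python
# Sketch: joint time-domain separation + E2E ASR model (illustrative stubs only).
import torch
import torch.nn as nn


class SeparatorStub(nn.Module):
    """Stand-in for a Conv-TasNet-style time-domain separator (2 speakers)."""
    def __init__(self, num_speakers: int = 2):
        super().__init__()
        # A single Conv1d replaces the real encoder/TCN-mask/decoder stack.
        self.net = nn.Conv1d(1, num_speakers, kernel_size=16, padding=8)

    def forward(self, mixture: torch.Tensor) -> torch.Tensor:
        # mixture: (batch, samples) -> (batch, num_speakers, samples)
        est = self.net(mixture.unsqueeze(1))
        return est[..., : mixture.shape[-1]]


class RecognizerStub(nn.Module):
    """Stand-in for an E2E recognizer producing per-frame token logits."""
    def __init__(self, vocab_size: int = 50, hop: int = 160):
        super().__init__()
        self.frontend = nn.Conv1d(1, 64, kernel_size=400, stride=hop)
        self.output = nn.Linear(64, vocab_size)

    def forward(self, speech: torch.Tensor) -> torch.Tensor:
        # speech: (batch, samples) -> (batch, frames, vocab_size)
        feats = self.frontend(speech.unsqueeze(1)).transpose(1, 2)
        return self.output(feats)


class JointSeparationASR(nn.Module):
    """Separator and recognizer stacked so they can be trained jointly."""
    def __init__(self, num_speakers: int = 2):
        super().__init__()
        self.separator = SeparatorStub(num_speakers)
        self.recognizer = RecognizerStub()

    def forward(self, mixture: torch.Tensor):
        streams = self.separator(mixture)  # (batch, speakers, samples)
        return [self.recognizer(streams[:, s]) for s in range(streams.shape[1])]


if __name__ == "__main__":
    model = JointSeparationASR()
    logits_per_speaker = model(torch.randn(4, 16000))  # 4 mixtures, 1 s at 16 kHz
    print([x.shape for x in logits_per_speaker])
```

In a real system, the per-speaker logits would be scored against the reference transcriptions with a permutation-invariant ASR loss (e.g., CTC or attention-based), so that the ASR objective alone drives the joint optimization of the separator and recognizer.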
Year | DOI | Venue
---|---|---
2020 | 10.1109/ICASSP40776.2020.9053461 | ICASSP

DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34

References | Authors
---|---
0 | 7
Name | Order | Citations | PageRank |
---|---|---|---|
Thilo von Neumann | 1 | 6 | 2.57 |
Keisuke Kinoshita | 2 | 494 | 54.81 |
Lukas Drude | 3 | 95 | 11.10 |
Christoph Boeddeker | 4 | 3 | 3.84
Marc Delcroix | 5 | 699 | 62.07 |
Tomohiro Nakatani | 6 | 1327 | 139.18 |
Reinhold Haeb-Umbach | 7 | 1487 | 211.71 |