Title
A Scalable Noisy Speech Dataset and Online Subjective Test Framework
Abstract
Background noise is a major source of quality impairments in Voice over Internet Protocol (VoIP) and Public Switched Telephone Network (PSTN) calls. Recent work shows the efficacy of deep learning for noise suppression, but the datasets have been relatively small compared to those used in other domains (e.g., ImageNet) and the associated evaluations have been narrower in scope. To better facilitate deep learning research in Speech Enhancement, we present a noisy speech dataset (MS-SNSD) that can scale to arbitrary sizes depending on the number of speakers, noise types, and Speech to Noise Ratio (SNR) levels desired. We show that increasing the dataset size increases noise suppression performance, as expected. In addition, we provide an open-source evaluation methodology for subjectively evaluating results at scale using crowdsourcing, with a reference algorithm to normalize the results. To demonstrate the dataset and evaluation framework, we apply them to several noise suppressors, compare the subjective Mean Opinion Score (MOS) with objective quality measures such as SNR, PESQ, POLQA, and VISQOL, and show why MOS is still required. Our subjective MOS evaluation is the first large-scale evaluation of Speech Enhancement algorithms that we are aware of.
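The dataset described in the abstract is built by mixing clean speech with noise clips at the desired SNR levels. As a rough illustration of that mixing step only, the sketch below scales a noise clip so the speech-to-noise power ratio matches a target SNR before adding it to the clean signal; the function, file names, and NumPy/soundfile usage are assumptions for illustration, not the MS-SNSD reference code.

```python
# Minimal sketch of mixing clean speech with noise at a target SNR.
# Illustrative only; not the MS-SNSD implementation. File names and the
# mix_at_snr helper are hypothetical.
import numpy as np
import soundfile as sf


def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then add it to `clean`. Both inputs are 1-D float arrays."""
    # Tile or truncate the noise to match the speech length.
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[: len(clean)]

    clean_rms = np.sqrt(np.mean(clean ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))

    # Gain that brings the noise to the desired SNR relative to the speech.
    target_noise_rms = clean_rms / (10 ** (snr_db / 20))
    return clean + noise * (target_noise_rms / (noise_rms + 1e-12))


if __name__ == "__main__":
    # Hypothetical input files used for illustration.
    clean, sr = sf.read("clean_speech.wav")
    noise, _ = sf.read("noise.wav")
    for snr_db in (0, 10, 20, 40):
        sf.write(f"noisy_snr{snr_db}.wav", mix_at_snr(clean, noise, snr_db), sr)
```

Sweeping the SNR values and noise clips in this way is what lets a dataset of this kind scale to arbitrary sizes from a fixed pool of clean speech and noise recordings.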
Year: 2019
DOI: 10.21437/Interspeech.2019-3087
Venue: INTERSPEECH
DocType: Conference
Citations: 1
PageRank: 0.35
References: 0
Authors: 6
Name | Order | Citations | PageRank
Chandan K. Reddy | 1 | 803 | 73.50
Ebrahim Beyrami | 2 | 1 | 0.69
Jamie Pool | 3 | 2 | 0.72
Ross Cutler | 4 | 3 | 1.41
Sriram Srinivasan | 5 | 379 | 27.92
Johannes Gehrke | 6 | 13362 | 1055.06