Title
Polyphonic Sound Event and Sound Activity Detection: A Multi-Task Approach
Abstract
Polyphonic Sound Event Detection (SED) in real-world recordings is a challenging task because of the dynamic polyphony level, intensity, and duration of sound events. Current polyphonic SED systems do not model the temporal structure of sound events explicitly; instead they only estimate which sound events are present at each audio frame. Consequently, event-wise detection performance is much lower than segment-wise detection performance. In this work, we propose a joint model to improve the temporal localization of sound events using a multi-task learning setup. The first task predicts which sound events are present at each time frame; we call this branch the 'Sound Event Detection (SED) model'. The second task predicts whether any sound event is active at each frame; we call this branch the 'Sound Activity Detection (SAD) model'. We validate the proposed joint model by comparing it against separate single-task implementations whose individual predictions are aggregated. Our experiments on the URBAN-SED dataset show that the proposed joint model reduces False Positive (FP) and False Negative (FN) errors and improves both segment-wise and event-wise metrics.
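For illustration, the sketch below shows one way the two-branch multi-task setup described in the abstract could be wired up: a shared encoder feeds a frame-level multi-label SED head and a frame-level binary SAD head, and both are trained jointly with a weighted sum of binary cross-entropy losses. The CRNN backbone, layer sizes, number of classes, input shapes, and loss weight alpha are assumptions made for this sketch and are not specified in this record.

```python
# A minimal sketch of the joint SED + SAD multi-task setup described above.
# The shared CRNN backbone, layer sizes, and loss weighting are assumptions
# for illustration; the paper's exact architecture is not given in this record.
import torch
import torch.nn as nn

class JointSEDSAD(nn.Module):
    def __init__(self, n_mels=64, n_classes=10, hidden=64):
        super().__init__()
        # Shared convolutional front end over (batch, 1, frames, mel bins).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),            # pool only along frequency, keep frames
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),
        )
        # Shared recurrent layer over the frame axis.
        self.rnn = nn.GRU(64 * (n_mels // 16), hidden,
                          batch_first=True, bidirectional=True)
        # Task 1 (SED): per-frame, per-class presence probabilities.
        self.sed_head = nn.Linear(2 * hidden, n_classes)
        # Task 2 (SAD): per-frame binary "any event active" probability.
        self.sad_head = nn.Linear(2 * hidden, 1)

    def forward(self, mel):                   # mel: (batch, frames, n_mels)
        x = self.cnn(mel.unsqueeze(1))        # (batch, ch, frames, mel')
        x = x.permute(0, 2, 1, 3).flatten(2)  # (batch, frames, ch * mel')
        x, _ = self.rnn(x)
        return torch.sigmoid(self.sed_head(x)), torch.sigmoid(self.sad_head(x))

# Joint training step: both branches are optimized with a weighted sum of
# binary cross-entropy losses (the weight alpha is an assumed hyperparameter).
model = JointSEDSAD()
mel = torch.randn(8, 128, 64)                          # dummy batch of log-mel frames
sed_target = torch.randint(0, 2, (8, 128, 10)).float() # per-frame, per-class labels
sad_target = sed_target.max(dim=-1, keepdim=True).values  # active if any class is
sed_out, sad_out = model(mel)
alpha = 0.5
loss = nn.functional.binary_cross_entropy(sed_out, sed_target) \
       + alpha * nn.functional.binary_cross_entropy(sad_out, sad_target)
loss.backward()
```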
Year
2019
DOI
10.1109/WASPAA.2019.8937193
Venue
2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA)
Keywords
Polyphonic sound event detection, sound activity detection, multi-task learning
Field
Multi-task learning, Time frame, Computer science, Speech recognition, Activity detection, Sound localization, Acoustics, Polyphony, Sound event detection
DocType
Conference
ISSN
1931-1168
ISBN
978-1-7281-1124-7
Citations
0
PageRank
0.34
References
4
Authors
3
Name | Order | Citations | PageRank
Arjun Pankajakshan | 1 | 0 | 0.68
Helen L. Bear | 2 | 30 | 7.10
Emmanouil Benetos | 3 | 557 | 52.48