Title: Crowdsourcing Multi-label Audio Annotation Tasks with Citizen Scientists
Abstract: Annotating rich audio data is an essential aspect of training and evaluating machine listening systems. We approach this task in the context of temporally-complex urban soundscapes, which require multiple labels to identify overlapping sound sources. Typically this work is crowdsourced, and previous studies have shown that workers can quickly label audio with binary annotation for single classes. However, this approach can be difficult to scale when multiple passes with different focus classes are required to annotate data with multiple labels. In citizen science, where tasks are often image-based, annotation efforts typically label multiple classes simultaneously in a single pass. This paper describes our data collection on the Zooniverse citizen science platform, comparing the efficiencies of different audio annotation strategies. We compared multiple-pass binary annotation, single-pass multi-label annotation, and a hybrid approach: hierarchical multi-pass multi-label annotation. We discuss our findings, which support using multi-label annotation, with reference to volunteer citizen scientists' motivations.
Year: 2019
DOI: 10.1145/3290605.3300522
Venue: CHI
Keywords: audio annotation, citizen science, crowdsourcing
Field: Single pass, Data collection, Soundscape, Annotation, Crowdsourcing, Computer science, Human–computer interaction, Citizen science, Machine listening
DocType: Conference
ISBN: 978-1-4503-5970-2
Citations: 2
PageRank: 0.42
References: 0
Authors: 5
Name                      Order  Citations  PageRank
Mark Cartwright           1      5          2.87
Graham Dove               2      9          4.57
Ana Elisa Méndez Méndez   3      3          2.13
Juan Pablo Bello          4      1215       108.94
Oded Nov                  5      984        63.88