Title |
---|
Increasing the Reliability of Crowdsourcing Evaluations Using Online Quality Assessment |
Abstract |
---|
Manual annotations and transcriptions are increasingly important in areas such as behavioral signal processing, image processing, computer vision, and speech signal processing. Conventionally, this metadata has been collected by expert annotators. With the advent of crowdsourcing services, the scientific community has begun to crowdsource many tasks that researchers find tedious but that many human annotators can complete easily. While crowdsourcing is cheaper and more efficient, annotation quality often becomes a limitation. This paper investigates the use of reference sets with predetermined ground truth to monitor annotators’ accuracy and fatigue in real time. The reference set contains evaluations identical in form to the relevant questions being collected, so annotators cannot tell whether a given question is grading their performance. We explore these ideas on the emotional annotation of the MSP-IMPROV database. We present promising results that suggest our system is suitable for collecting accurate annotations. |
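The abstract describes a gold-standard quality-control loop: reference questions with known answers are interleaved, indistinguishably, with the real questions, and each annotator's running accuracy on them is tracked online. Below is a minimal sketch of that idea; the class name, interleaving rate, accuracy threshold, and the simulated annotator are illustrative assumptions, not the authors' implementation, and the paper's fatigue monitoring (accuracy drift over a session) is not modeled here.

```python
import random


class OnlineQualityMonitor:
    """Track an annotator's running accuracy on hidden reference questions."""

    def __init__(self, gold_items, interleave_rate=0.2, min_accuracy=0.7):
        self.gold_items = gold_items            # list of (question, correct_answer)
        self.interleave_rate = interleave_rate  # fraction of served items that are gold
        self.min_accuracy = min_accuracy        # flag the annotator below this
        self.correct = 0
        self.graded = 0

    def next_is_gold(self):
        """Randomly decide whether the next served item is a reference question."""
        return random.random() < self.interleave_rate

    def grade(self, given_answer, correct_answer):
        """Update the running score after a reference question is answered."""
        self.graded += 1
        self.correct += int(given_answer == correct_answer)

    def accuracy(self):
        return self.correct / self.graded if self.graded else 1.0

    def annotator_ok(self):
        """True while reference accuracy stays at or above the threshold."""
        return self.accuracy() >= self.min_accuracy


if __name__ == "__main__":
    # Hypothetical gold set; in the paper these would be clips with
    # predetermined ground-truth emotional labels.
    monitor = OnlineQualityMonitor(
        gold_items=[("clip_07", "happy"), ("clip_19", "angry"), ("clip_31", "sad")]
    )
    labels = ["happy", "angry", "sad", "neutral"]
    for _ in range(200):                      # simulated annotation session
        if monitor.next_is_gold():
            question, truth = random.choice(monitor.gold_items)
            answer = random.choice(labels)    # stand-in for a live annotator's response
            monitor.grade(answer, truth)
            if not monitor.annotator_ok():
                print(f"annotator flagged after {monitor.graded} reference items "
                      f"(accuracy {monitor.accuracy():.2f})")
                break
        # else: serve a real question and store the annotation
```

Because the reference items are served through the same interface as the real ones, the annotator cannot adapt behavior selectively, which is the property the abstract emphasizes.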
Year | DOI | Venue |
---|---|---
2016 | 10.1109/TAFFC.2015.2493525 | IEEE Trans. Affective Computing |
Keywords | Field | DocType
---|---|---
Videos, Crowdsourcing, Real-time systems, Databases, Speech recognition, Computer vision, Image processing, Signal processing, Behavioral sciences | Metadata, Signal processing, Transcription (linguistics), Annotation, Information retrieval, Computer science, Crowdsourcing, Image processing, Crowdsource, Artificial intelligence, Multimedia, Machine learning | Journal
Volume | Issue | ISSN
---|---|---
7 | 4 | 1949-3045
Citations | PageRank | References
---|---|---
19 | 0.74 | 20
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---
Alec Burmania | 1 | 44 | 1.88 |
S. Parthasarathy | 2 | 60 | 5.25
Carlos Busso | 3 | 1616 | 93.04 |