Title
Ambiguity Helps: Classification With Disagreements In Crowdsourced Annotations
Abstract
Imagine we show an image to a person and ask them to decide whether the scene in the image is warm or not warm, and whether or not it is easy to spot a squirrel in the image. For exactly the same image, the answers to those questions are likely to differ from person to person, because the task is inherently ambiguous. Such an ambiguous, and therefore challenging, task pushes the boundary of computer vision by showing what can and cannot be learned from visual data. Crowdsourcing has been invaluable for collecting annotations, particularly for tasks that go beyond a clear-cut dichotomy, where multiple human judgments per image are needed to reach a consensus. This paper makes both conceptual and technical contributions. On the conceptual side, we define disagreements among annotators as privileged information about the data instance. On the technical side, we propose a framework to incorporate annotation disagreements into classifiers. The proposed framework is simple, relatively fast, and outperforms classifiers that do not take the disagreements into account, especially when tested on high-confidence annotations.
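To make the idea concrete, the sketch below is a minimal illustration, not the paper's actual framework: it treats per-image annotator disagreement as side information available only at training time, turning the vote split over synthetic crowd labels into a per-image confidence weight for an off-the-shelf classifier. All data, the seven-annotator setup, and the weighting scheme are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for the paper's setting (all data here is synthetic):
# each image has a feature vector and binary votes ("warm" vs.
# "not warm") from several crowd annotators.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # image features (toy)
w_true = rng.normal(size=5)
p_true = 1.0 / (1.0 + np.exp(-X @ w_true))          # latent "warmness"
votes = rng.random((200, 7)) < p_true[:, None]      # 7 noisy annotators

# Majority vote yields the training label; the vote split yields a
# per-image disagreement score in [0, 0.5].
p_pos = votes.mean(axis=1)
y = (p_pos >= 0.5).astype(int)
disagreement = np.minimum(p_pos, 1.0 - p_pos)

# The disagreement acts as privileged information: it exists only at
# training time. Here it simply down-weights ambiguous images.
confidence = 1.0 - 2.0 * disagreement               # 1 = unanimous vote
clf = LogisticRegression().fit(X, y, sample_weight=confidence)

# At test time the classifier sees the image features alone.
print(clf.predict(X[:5]))
```

The key design point mirrors the privileged-information setting: the disagreement signal is consumed only while fitting the model, so at test time the classifier needs nothing beyond the image features.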
Year
2016
DOI
10.1109/CVPR.2016.241
Venue
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Field
Computer vision, Annotation, Ask price, Crowdsourcing, Computer science, Artificial intelligence, Ambiguity
DocType
Conference
Volume
2016
Issue
1
ISSN
1063-6919
Citations
0
PageRank
0.34
References
8
Authors
4
Name                           Order  Citations  PageRank
Viktoriia Sharmanska           1      111        7.10
Daniel Hernández-Lobato        2      440        26.10
José Miguel Hernández-Lobato   3      613        49.06
Novi Quadrianto                4      359        21.71