Title
Do we need annotation experts? A case study in celiac disease classification.
Abstract
Inference of clinically-relevant findings from the visual appearance of images has become an essential part of processing pipelines for many problems in medical imaging. Typically, a sufficient amount of labeled training data is assumed to be available, provided by domain experts. However, acquiring this data is usually a time-consuming and expensive endeavor. In this work, we ask whether, for certain problems, expert knowledge is actually required. Specifically, we investigate the impact of letting non-expert volunteers annotate a database of endoscopy images, which are then used to assess the absence or presence of celiac disease. In contrast to previous approaches, we are not interested in algorithms that can handle the label noise. Instead, we present compelling empirical evidence that label noise can be compensated by a sufficiently large corpus of training data labeled by non-experts.
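The classification setup outlined in the abstract (texture descriptors computed from endoscopy images, a classifier trained on non-expert labels, evaluation against expert ground truth) could look roughly like the sketch below. This is not the authors' code; the choice of uniform local binary pattern histograms and an SVM is an assumption suggested by the paper's keywords, and all function names are illustrative.

# Illustrative sketch only: train on (possibly noisy) non-expert labels,
# evaluate against an expert-labeled test set.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

P, R = 8, 1  # LBP neighborhood: 8 samples on a circle of radius 1

def lbp_histogram(gray_image):
    # Uniform LBP codes take values 0..P+1; the normalized histogram
    # serves as a fixed-length texture descriptor for the whole image.
    codes = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_and_evaluate(train_images, noisy_labels, test_images, expert_labels):
    # Fit the classifier on non-expert (crowdsourced) labels and score it
    # against expert ground truth on a held-out test set.
    X_train = np.array([lbp_histogram(img) for img in train_images])
    X_test = np.array([lbp_histogram(img) for img in test_images])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train, noisy_labels)
    return accuracy_score(expert_labels, clf.predict(X_test))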
Year
2014
DOI
10.1007/978-3-319-10470-6_57
Venue
Lecture Notes in Computer Science
Field
Computer vision, Ask price, Annotation, Empirical evidence, Crowdsourcing, Medical imaging, Inference, Computer science, Local binary patterns, Artificial intelligence, Visual appearance
DocType
Conference
Volume
8674
Issue
Pt 2
ISSN
0302-9743
Citations
7
PageRank
0.49
References
12
Authors
5
Name                  Order  Citations  PageRank
R Kwitt               1      448        35.15
Sebastian Hegenbart   2      86         7.10
N Rasiwasia           3      1173       34.61
Andreas Vécsei        4      167        18.36
Andreas Uhl           5      1958       223.07