Title
Towards Universal Evaluation of Image Annotation Interfaces
Abstract
To guide the design of interactive image annotation systems that generalize to new domains and applications, we need ways to evaluate the capabilities of new annotation tools across a range of image, content, and task domains. In this work, we introduce Corsica, a test harness for image annotation tools that uses calibration images to evaluate a tool's capabilities on general image properties and task requirements. Corsica comprises sets of three key components: 1) synthesized images with visual elements that are not domain-specific, 2) target microtasks that connect the visual elements to the tools under evaluation, and 3) ground truth data for each microtask and visual element pair. By introducing a specification for calibration images and microtasks, we aim to create an evolving repository that allows the community to propose new evaluation challenges. Our work aims to facilitate the robust verification of image annotation tools and techniques.
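As a minimal sketch of how the calibration triples described in the abstract (synthesized image, target microtask, ground truth) might be represented, assuming a simple exact-match scorer; the names CalibrationCase and evaluate, the file paths, and the annotation format are illustrative assumptions, not the paper's actual implementation:

from dataclasses import dataclass

@dataclass
class CalibrationCase:
    image_path: str    # synthesized calibration image with domain-agnostic visual elements
    microtask: str     # target microtask connecting the visual elements to the tool
    ground_truth: dict # expected annotation for this microtask / visual element pair

def evaluate(tool_output: dict, case: CalibrationCase) -> bool:
    """Naive exact-match check; a real harness would use task-specific metrics."""
    return tool_output == case.ground_truth

# Usage: score a tool's annotations against a suite of calibration cases.
suite = [
    CalibrationCase("images/gradient_blob.png",
                    "trace the boundary of the blob",
                    {"polygon": [(0, 0), (10, 0), (10, 10)]}),
]
outputs = [{"polygon": [(0, 0), (10, 0), (10, 10)]}]
passed = sum(evaluate(out, case) for out, case in zip(outputs, suite))
print(f"{passed}/{len(suite)} microtasks passed")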
Year
2019
DOI
10.1145/3332167.3357122
Venue
The Adjunct Publication of the 32nd Annual ACM Symposium on User Interface Software and Technology
Keywords
crowdsourcing, evaluation, image annotation, tools
Field
Automatic image annotation, Computer science, Human–computer interaction, Multimedia
DocType
Conference
ISBN
978-1-4503-6817-9
Citations
0
PageRank
0.34
References
0
Authors
5
Name | Order | Citations | PageRank
Andrew M. Vernier | 1 | 0 | 0.34
Jean Song | 2 | 39 | 5.68
Edward Sun | 3 | 0 | 0.68
Allison Kench | 4 | 0 | 0.34
Walter Lasecki | 5 | 833 | 67.19