Title
Pushing the limits of mechanical turk: qualifying the crowd for video geo-location
Abstract
In this article we review the methods we have developed for finding Mechanical Turk participants to manually annotate the geo-location of random videos from the web. We require high-quality annotations for this project, as we are attempting to establish a human baseline for future comparison with machine systems. This task differs from a standard Mechanical Turk task in that it is difficult for both humans and machines, whereas a standard task is usually easy for humans and difficult or impossible for machines. This article discusses the varied difficulties we encountered while qualifying annotators and the steps we took to select the individuals most likely to do well at our annotation task in the future.
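The abstract centers on screening crowd workers with a qualification step before admitting them to a hard annotation task. As an illustration only, the sketch below shows how such a screening step could be wired up today with the boto3 MTurk client; the paper predates this API, and the qualification name, score threshold, reward, and XML placeholders are all assumptions, not the authors' tooling.

```python
# Hypothetical sketch: gate a video geo-location HIT behind an auto-graded
# qualification test, using the boto3 MTurk API. The XML documents and the
# score threshold of 70 are illustrative placeholders.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

QUALIFICATION_TEST_XML = "..."  # QuestionForm XML with sample geo-location questions
ANSWER_KEY_XML = "..."          # AnswerKey XML that scores those questions
HIT_QUESTION_XML = "..."        # QuestionForm XML for the real annotation task

# Create a qualification type that workers must pass before seeing the task.
qual = mturk.create_qualification_type(
    Name="Video geo-location screening (example)",
    Description="Locate sample videos on a map to qualify for annotation HITs.",
    QualificationTypeStatus="Active",
    Test=QUALIFICATION_TEST_XML,
    AnswerKey=ANSWER_KEY_XML,
    TestDurationInSeconds=30 * 60,
    RetryDelayInSeconds=7 * 24 * 3600,
)

# Only workers scoring at least 70 on the test may accept the annotation HITs.
requirement = {
    "QualificationTypeId": qual["QualificationType"]["QualificationTypeId"],
    "Comparator": "GreaterThanOrEqualTo",
    "IntegerValues": [70],
    "ActionsGuarded": "Accept",
}

hit = mturk.create_hit(
    Title="Annotate the geo-location of a web video",
    Description="Watch a short video and mark where you think it was filmed.",
    Keywords="video, annotation, geolocation",
    Reward="0.25",
    MaxAssignments=3,
    AssignmentDurationInSeconds=20 * 60,
    LifetimeInSeconds=7 * 24 * 3600,
    Question=HIT_QUESTION_XML,
    QualificationRequirements=[requirement],
)
```

The point the paper makes applies here: because the task is hard for honest humans too, the passing threshold on such a test has to be calibrated rather than set near-perfect, or it will reject exactly the careful annotators being sought.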
Year
2012
DOI
10.1145/2390803.2390815
Venue
CrowdMM@ACM Multimedia
Keywords
high quality annotation, future comparison, standard mechanical turk task, varied difficulty, mechanical turk participant, human baseline, random video, machine system, annotation task, mechanical turk, video geo-location, manual annotation, multimodal, annotation, crowdsourcing
Field
World Wide Web, Annotation, Crowdsourcing, Computer science, Geolocation, Manual annotation, Multimedia
DocType
Conference
Citations
10
PageRank
0.55
References
6
Authors
5
Name               Order   Citations   PageRank
Luke Gottlieb      1       61          5.79
Jae-Young Choi     2       7831        10.19
Pascal Kelm        3       59          7.43
Thomas Sikora      4       10          0.55
Gerald Friedland   5       1127        96.23