Title
Truth Serums for Massively Crowdsourced Evaluation Tasks
Abstract
Incentivizing effort and eliciting truthful responses from agents in the absence of verifiability is a major challenge faced while crowdsourcing many types of evaluation tasks like labeling images, grading assignments in online courses, etc. In this paper, we propose new reward mechanisms for such settings that, unlike most previously studied mechanisms, impose minimal assumptions on the structure and knowledge of the underlying generating model, can account for heterogeneity in the agents' abilities, require no extraneous elicitation from them, and furthermore allow their beliefs to be (almost) arbitrary. Moreover, these mechanisms have the simple and intuitive structure of output agreement mechanisms, which, despite not incentivizing truthful behavior, have nevertheless been quite popular in practice. We achieve this by leveraging a typical characteristic of many of these settings, which is the existence of a large number of similar tasks.
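The record does not describe the mechanisms beyond the abstract. As a rough illustration of the output-agreement flavor it mentions, the Python sketch below pays an agent only when their report matches a peer's report on the same task, and scales that payment by the inverse of how common the agreed-upon answer is across a large batch of similar tasks (the "massively crowdsourced" structure the abstract leverages). The helper names, the 1/frequency scaling, and the toy data are assumptions made for illustration, not the paper's actual reward rule.

```python
from collections import Counter

def estimate_answer_frequencies(reports):
    """Empirical frequency of each answer across a large batch of similar
    tasks; `reports` maps task_id -> list of agent answers.
    (Hypothetical helper: the batch-level estimate is the only place the
    'many similar tasks' structure enters this sketch.)"""
    counts = Counter(ans for answers in reports.values() for ans in answers)
    total = sum(counts.values())
    return {ans: c / total for ans, c in counts.items()}

def agreement_reward(my_answer, peer_answer, freq, scale=1.0):
    """Output-agreement style payment: zero unless the two reports match,
    larger when the matched answer is rare in the batch (an assumed
    1/frequency scaling, chosen purely for illustration)."""
    if my_answer != peer_answer:
        return 0.0
    return scale / max(freq.get(my_answer, 0.0), 1e-9)

# Toy usage: three binary labeling tasks, two agents per task.
reports = {
    "task1": ["yes", "yes"],
    "task2": ["no", "yes"],
    "task3": ["yes", "yes"],
}
freq = estimate_answer_frequencies(reports)
for task, (a, b) in reports.items():
    print(task, round(agreement_reward(a, b, freq), 2))
```

Rewarding agreement on rare answers more heavily is one standard way to discourage blind herding on the most common label; whether and how the paper's mechanisms achieve this is not specified in this record.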
Year
2015
Venue
CoRR
Field
Mathematical optimization, Grading (education), Computer science, Simulation, Crowdsourcing, Human–computer interaction
DocType
Journal
Volume
abs/1507.07045
Citations
8
PageRank
0.56
References
9
Authors
5
Name | Order | Citations | PageRank
Vijay Kamble | 1 | 50 | 7.19
Nihar B. Shah | 2 | 1202 | 77.17
David Marn | 3 | 11 | 1.33
Abhay Parekh | 4 | 96 | 9.33
Kannan Ramchandran | 5 | 9401 | 1029.57