Title
Rating Mechanisms for Sustainability of Crowdsourcing Platforms
Abstract
Crowdsourcing leverages the diverse skill sets of large collections of individual contributors to solve problems and execute projects, where contributors may vary significantly in experience, expertise, and interest in completing tasks. Hence, to ensure the satisfaction of its task requesters, most existing crowdsourcing platforms focus primarily on supervising contributors' behavior. This lopsided approach to supervision negatively impacts contributor engagement and platform sustainability. In this paper, we introduce rating mechanisms to evaluate requesters' behavior, so that the health and sustainability of crowdsourcing platforms can be improved. We build a game-theoretic model to systematically account for the different goals of requesters, contributors, and the platform, and for their interactions. On the basis of this model, we focus on a specific application, in which we aim to design a rating policy that incentivizes requesters to engage less-experienced contributors. Given the hardness of the problem, we develop a time-efficient heuristic algorithm with a theoretical bound analysis. Finally, we conduct a user study on Amazon Mechanical Turk (MTurk) to validate the central hypothesis of the model. We provide a simulation based on 3 million task records extracted from MTurk, demonstrating that our rating policy can appreciably motivate requesters to hire less-experienced contributors.
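For illustration only, a minimal sketch of the kind of requester-side rating policy the abstract describes: a rating update that rewards a requester for assigning a task to a less-experienced contributor. This is not the paper's actual mechanism; the function name, the experience threshold, the step size, and the 0-5 rating scale are all hypothetical assumptions.

```python
# Hypothetical sketch (not the paper's mechanism): a requester-side rating
# update that rewards assigning tasks to less-experienced contributors.
# The threshold, step size, and rating bounds below are illustrative assumptions.

def update_requester_rating(rating: float,
                            contributor_tasks_done: int,
                            step: float = 0.1,
                            experience_threshold: int = 10) -> float:
    """Return the requester's new rating after one task assignment.

    The rating moves up when the hired contributor has completed fewer
    tasks than `experience_threshold`, and slightly down otherwise,
    clamped to a 0-5 scale.
    """
    delta = step if contributor_tasks_done < experience_threshold else -0.2 * step
    return max(0.0, min(5.0, rating + delta))


if __name__ == "__main__":
    r = 3.0
    for tasks_done in [2, 50, 5, 120]:  # a mixed hiring history
        r = update_requester_rating(r, tasks_done)
    print(f"rating after four assignments: {r:.2f}")
```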
Year
2019
DOI
10.1145/3357384.3357933
Venue
Proceedings of the 28th ACM International Conference on Information and Knowledge Management
Keywords
bi-level programming, crowdsourcing, rating policy
Field
Data science, Information retrieval, Crowdsourcing, Computer science, Sustainability
DocType
Conference
ISBN
978-1-4503-6976-3
Citations
0
PageRank
0.34
References
0
Authors
3
Name                      Order  Citations  PageRank
Chenxi Qiu                1      0          0.34
Anna Cinzia Squicciarini  2      1301       106.30
Sarah Rajtmajer           3      4          1.74