Title
Can The Crowd Identify Misinformation Objectively? The Effects of Judgment Scale and Assessor's Background
Abstract
Truthfulness judgments are a fundamental step in the process of fighting misinformation, as they are crucial to train and evaluate classifiers that automatically distinguish true from false statements. Such judgments are usually made by experts, such as journalists for political statements or medical doctors for medical statements. In this paper, we follow a different approach and rely on (non-expert) crowd workers. This naturally leads to the following research question: can crowdsourcing be reliably used to assess the truthfulness of information and to create large-scale labeled collections for information credibility systems? To address this question, we present the results of an extensive crowdsourcing study: we collect thousands of truthfulness assessments over two datasets, and we compare expert judgments with crowd judgments expressed on scales of varying granularity. We also measure the political bias and the cognitive background of the workers, and quantify their effects on the reliability of the data provided by the crowd.
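The comparison the abstract describes, aggregating crowd judgments collected on a fine-grained scale and checking their agreement with expert labels, can be illustrated with a minimal sketch. This is not the authors' code: the data values, the six-level expert scale, the 100-level crowd scale, and the median aggregation are all assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code): comparing aggregated crowd
# truthfulness judgments against expert labels, for scales of different
# granularity. All data values below are hypothetical.
from statistics import median
from scipy.stats import spearmanr

# Hypothetical data: per statement, an expert label on a six-level scale
# (0 = entirely false ... 5 = true) and raw judgments collected from
# several crowd workers on a 100-level scale.
expert_labels = [5, 3, 0, 4, 1]
crowd_judgments = [
    [90, 75, 88],   # judgments for statement 1
    [55, 60, 40],
    [10, 25, 5],
    [70, 95, 80],
    [20, 35, 15],
]

# Aggregate each statement's crowd judgments with the median (a common,
# robust choice), then measure rank agreement with the expert labels.
aggregated = [median(js) for js in crowd_judgments]
rho, p_value = spearmanr(expert_labels, aggregated)
print(f"Spearman correlation with expert labels: {rho:.2f} (p={p_value:.3f})")
```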
Year
2020
DOI
10.1145/3397271.3401112
Venue
SIGIR '20: The 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, China, July 2020
DocType
Conference
ISBN
978-1-4503-8016-4
Citations
0
PageRank
0.34
References
32
Authors
6
Name | Order | Citations | PageRank
Kevin Roitero | 1 | 30 | 13.74
Michael Soprano | 2 | 1 | 2.72
Fan Shaoyang | 3 | 0 | 0.34
Damiano Spina | 4 | 347 | 29.96
Stefano Mizzaro | 5 | 862 | 85.52
Gianluca Demartini | 6 | 744 | 54.56