Title
Identifying top news using crowdsourcing
Abstract
The influential Text REtrieval Conference (TREC) has always relied upon specialist assessors or, occasionally, participating groups to create relevance judgements for the tracks that it runs. Recently, however, crowdsourcing has been championed as a cheap, fast and effective alternative to traditional TREC-like assessments. In 2010, TREC tracks experimented with crowdsourcing for the first time. In this paper, we report our successful experience in creating relevance assessments for the TREC Blog track 2010 top news stories task using crowdsourcing. In particular, we crowdsourced both real-time newsworthiness assessments for news stories and traditional relevance assessments for blog posts. We conclude that crowdsourcing appears to be not only a feasible, but also a cheap and fast means to generate relevance assessments. Furthermore, we detail our experiences running the crowdsourced evaluation of the TREC Blog track, discuss the lessons learned, and provide best practices.
Year
2013
DOI
10.1007/s10791-012-9186-z
Venue
Inf. Retr.
Keywords
top news stories task, crowdsourced evaluation, relevance judgement, traditional relevance assessment, trec blog track, traditional trec-like assessment, blog post, best practice, news story, relevance assessment, crowdsourcing
Field
Data mining, World Wide Web, Best practice, Information retrieval, Computer science, Crowdsourcing, Text Retrieval Conference
DocType
Journal
Volume
16
Issue
2
ISSN
1573-7659
Citations
11
PageRank
0.88
References
10
Authors
3
Name               Order  Citations  PageRank
Richard McCreadie  1      403        32.43
Craig Macdonald    2      2588       178.50
Iadh Ounis         3      3438       234.59