Title: Narratives in Crowdsourced Evaluation of Visualizations: A Double-Edged Sword?
Abstract: We explore the effects of providing task context when evaluating visualization tools using crowdsourcing. We gave crowd workers i) abstract information visualization tasks without any context, ii) tasks where we added semantics to the dataset, and iii) tasks with two types of backstory narratives: an analytic narrative and a decision-making narrative. Contrary to our expectations, we did not find evidence that adding data semantics increases accuracy, and further found that our backstory narratives can even decrease accuracy. Adding dataset semantics can however increase attention and provide subjective benefits in terms of confidence, perceived easiness, task enjoyability, and perceived usefulness of the visualization. Nevertheless, our backstory narratives did not appear to provide additional subjective benefits. These preliminary findings suggest that narratives may have complex and unanticipated effects, calling for more studies in this area.
Year: 2017
DOI: 10.1145/3025453.3025870
Venue: CHI
Keywords: crowdsourcing, evaluation, instructions, narrative, information visualization, decision making
Field: World Wide Web, Information visualization, Visualization, Computer science, Crowdsourcing, Crowdsource, Narrative, Human–computer interaction, Data semantics, Multimedia, Semantics, SWORD
DocType: Conference
Citations: 5
PageRank: 0.39
References: 29
Authors: 3

Name                   Order  Citations  PageRank
Evanthia Dimara        1      54         6.68
Anastasia Bezerianos   2      674        37.75
Pierre Dragicevic      3      1639       73.69