Title
Do People and Neural Nets Pay Attention to the Same Words: Studying Eye-tracking Data for Non-factoid QA Evaluation
Abstract
We investigated how users evaluate passage-length answers to non-factoid questions. We conducted a study in which answers were presented to users, sometimes with automatic word highlighting. Users were asked to evaluate answer quality, correctness, completeness, and conciseness. Words in the answer were also annotated, both explicitly through user mark-up and implicitly through gaze data obtained from eye tracking. Our results show that the perceived correctness of an answer depends strongly on its completeness, while conciseness is less important. Analysis of the annotated words showed that correct and incorrect answers were assessed differently. Automatic highlighting helped users evaluate answers more quickly while maintaining accuracy, particularly when the highlighting was similar to their own annotation. We fine-tuned a BERT model on a non-factoid QA task to examine whether the model attends to words similar to those annotated by users. A similarity was found; consequently, we propose a method that exploits the BERT attention map to generate highlighting suggestions that simulate eye gaze during user evaluation.
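The abstract describes using the attention map of a fine-tuned BERT model to suggest which answer words to highlight. Below is a minimal illustrative sketch of that idea, not the authors' implementation: the model name (`bert-base-uncased` as a stand-in for a checkpoint fine-tuned on non-factoid QA), the choice of the last layer's [CLS] attention, and the top-k cutoff are all assumptions made for this example.

```python
# Illustrative sketch: rank answer tokens by the attention they receive from [CLS]
# in the last layer of a BERT model, and use the top-scoring tokens as highlight
# suggestions. Model, layer, and threshold choices are assumptions, not the paper's.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # stand-in for a model fine-tuned on non-factoid QA
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)
model.eval()

def suggest_highlights(question: str, answer: str, top_k: int = 5):
    """Return the answer tokens that receive the most [CLS] attention."""
    enc = tokenizer(question, answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**enc)
    # Average the last layer's attention over all heads: shape (seq_len, seq_len).
    attn = out.attentions[-1].mean(dim=1)[0]
    cls_attn = attn[0]  # attention paid by the [CLS] token to every other token
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    # token_type_ids == 1 marks the second segment, i.e. the answer passage.
    answer_mask = enc["token_type_ids"][0] == 1
    scored = [
        (tok, cls_attn[i].item())
        for i, tok in enumerate(tokens)
        if answer_mask[i] and tok not in ("[CLS]", "[SEP]")
    ]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

print(suggest_highlights(
    "why is the sky blue?",
    "sunlight scatters off air molecules, and blue light scatters the most",
))
```

A real system would use a checkpoint fine-tuned on answer-quality judgments and would merge WordPiece sub-tokens back into whole words before presenting highlights to users.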
Year: 2020
DOI: 10.1145/3340531.3412043
Venue: CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 2020
DocType: Conference
ISBN: 978-1-4503-6859-9
Citations: 1
PageRank: 0.36
References: 0
Authors: 6
Name              Order  Citations  PageRank
Valeria Bolotova  1      1          0.36
Vladislav Blinov  2      2          1.05
Yukun Zheng       3      31         4.02
W. Bruce Croft    4      17812      2796.94
Falk Scholer      5      1244       93.27
Mark Sanderson    6      3751       341.56