Title
What's the Issue Here?: Task-based Evaluation of Reader Comment Summarization Systems
Abstract
Automatic summarization of reader comments in on-line news is an extremely challenging task and a capability for which there is a clear need. Work to date has focussed on producing extractive summaries using well-known techniques imported from other areas of language processing. But are extractive summaries of comments what users really want? Do they support users in performing the sorts of tasks they are likely to want to perform with reader comments? In this paper we address these questions by doing three things. First, we offer a specification of one possible summary type for reader comment, based on an analysis of reader comment in terms of issues and viewpoints. Second, we define a task-based evaluation framework for reader comment summarization that allows summarization systems to be assessed in terms of how well they support users in a time-limited task of identifying issues and characterising opinion on issues in comments. Third, we describe a pilot evaluation in which we used the task-based evaluation framework to evaluate a prototype reader comment clustering and summarization system, demonstrating the viability of the evaluation framework and illustrating the sorts of insight such an evaluation affords.
Year
2016
Venue
LREC 2016 - Tenth International Conference on Language Resources and Evaluation
Keywords
Reader comment summarization, task-based evaluation, social media argumentation
Field
Multi-document summarization, Automatic summarization, Information retrieval, Viewpoints, Computer science, Natural language processing, Artificial intelligence, Cluster analysis
DocType
Conference
Citations
3
PageRank
0.42
References
5
Authors
8
Name                      Order  Citations  PageRank
Emma Barker               1      35         6.86
Monica Lestari Paramita   2      207        13.65
Adam Funk                 3      314        17.90
Emina Kurtic              4      32         3.48
Ahmet Aker                5      267        30.75
Jonathan Foster           6      9          2.25
Mark Hepple               7      702        75.09
Robert J. Gaizauskas      8      985        107.95