Title
Automatically evaluating content selection in summarization without human models
Abstract
We present a fully automatic method for content selection evaluation in summarization that does not require the creation of human model summaries. Our work capitalizes on the assumption that the distribution of words in the input and an informative summary of that input should be similar to each other. Results on a large-scale evaluation from the Text Analysis Conference show that input-summary comparisons are very effective for the evaluation of content selection. Our automatic methods rank participating systems similarly to the manual model-based pyramid evaluation and to manual human judgments of responsiveness. The best feature, Jensen-Shannon divergence, leads to a correlation as high as 0.88 with manual pyramid and 0.73 with responsiveness evaluations.
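The abstract's key measure compares the word distribution of the input with that of the summary via Jensen-Shannon divergence. Below is a minimal sketch of that idea using unsmoothed maximum-likelihood word distributions; the paper's exact tokenization and smoothing choices are not reproduced here, and the function name is illustrative.

```python
import math
from collections import Counter

def js_divergence(input_text, summary_text):
    """Jensen-Shannon divergence between the word distributions of two texts.

    Returns a value in [0, 1] (using log base 2); 0 means identical
    distributions, so a lower score suggests the summary's word
    distribution is closer to the input's.
    """
    counts_p = Counter(input_text.lower().split())
    counts_q = Counter(summary_text.lower().split())
    total_p = sum(counts_p.values())
    total_q = sum(counts_q.values())
    div = 0.0
    for w in set(counts_p) | set(counts_q):
        p = counts_p[w] / total_p
        q = counts_q[w] / total_q
        m = 0.5 * (p + q)  # mixture distribution
        if p > 0:
            div += 0.5 * p * math.log2(p / m)
        if q > 0:
            div += 0.5 * q * math.log2(q / m)
    return div
```

For ranking summarizers, one would score each system summary against its input and correlate the resulting ranking with manual scores; identical texts score 0, fully disjoint vocabularies score 1.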
Year
2009
Venue
EMNLP
Keywords
manual pyramid,large scale evaluation,content selection,manual model-based pyramid evaluation,human model summary,manual human judgment,responsiveness evaluation,automatic method,jensen-shannon divergence,content selection evaluation,text analysis
Field
Automatic summarization,Text mining,Information retrieval,Computer science,Correlation,Artificial intelligence,Natural language processing,Pyramid,Machine learning
DocType
Conference
Volume
D09-1
Citations
47
PageRank
1.70
References
11
Authors
2
Name | Order | Citations | PageRank
Annie Louis | 1 | 443 | 24.78
Ani Nenkova | 2 | 1831 | 109.14