Title
Automatic Summary Evaluation without Human Models
Abstract
We present a fully automatic approach for summarization evaluation that does not require the creation of human model summaries. Our work capitalizes on the fact that a summary contains the most representative information from the input, and so it is reasonable to expect that the distribution of terms in the input and a good summary are similar to each other. To compare the term distributions, we use KL and Jensen-Shannon divergence, cosine similarity, as well as unigram and multinomial models of text. Our results on a large-scale evaluation from the Text Analysis Conference show that input-summary comparisons can be very effective. They can be used to rank participating systems very similarly to manual model-based evaluations (pyramid evaluation) as well as to manual human judgments of summary quality without reference to a model. Our best feature, Jensen-Shannon divergence, leads to a correlation as high as 0.9 with manual evaluations.
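The core idea of the abstract, comparing the term distribution of the input with that of the summary, can be illustrated with a short sketch. The snippet below is an assumption-laden illustration rather than the authors' implementation: it builds add-one-smoothed unigram distributions over a shared vocabulary (the smoothing choice and the example texts are hypothetical) and scores a summary by Jensen-Shannon divergence, where a lower value means the summary's word distribution is closer to the input's.

# Illustrative sketch (not the authors' code): score a summary by the
# Jensen-Shannon divergence between input and summary term distributions.
import math
from collections import Counter

def term_distribution(text, vocab):
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    # Add-one smoothing over a shared vocabulary so every log ratio is defined.
    return {w: (counts[w] + 1.0) / (total + len(vocab)) for w in vocab}

def kl_divergence(p, q):
    # KL(P || Q) in bits, summed over the shared vocabulary.
    return sum(p[w] * math.log(p[w] / q[w], 2) for w in p)

def js_divergence(p, q):
    # JSD(P, Q) = 1/2 KL(P || M) + 1/2 KL(Q || M), with M the average distribution.
    m = {w: 0.5 * (p[w] + q[w]) for w in p}
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

input_text = "the economy grew slowly while unemployment remained high"
summary_text = "economy grew slowly and unemployment stayed high"
vocab = set(input_text.lower().split()) | set(summary_text.lower().split())
p = term_distribution(input_text, vocab)
q = term_distribution(summary_text, vocab)
print(js_divergence(p, q))  # lower divergence = summary closer to the input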
Year
2008
Venue
TAC
Field
Data mining, Automatic summarization, Text mining, Divergence, Cosine similarity, Computer science, Multinomial distribution, Correlation, Pyramid
DocType
Conference
Citations
16
PageRank
0.86
References
17
Authors
2
Name         Order  Citations  PageRank
Annie Louis  1      443        24.78
Ani Nenkova  2      1831       109.14