Title
TED: A Pretrained Unsupervised Summarization Model with Theme Modeling and Denoising
Abstract
Text summarization aims to extract essential information from a piece of text and transform it into a concise version. Existing unsupervised abstractive summarization models rely on recurrent neural network frameworks and ignore the abundant unlabeled corpora available. To address these issues, we propose TED, a transformer-based unsupervised summarization system pretrained on large-scale data. We first leverage the lead bias in news articles to pretrain the model on large-scale corpora. Then, we finetune TED on target domains through theme modeling and a denoising autoencoder to enhance the quality of summaries. Notably, TED outperforms all unsupervised abstractive baselines on the NYT, CNN/DM, and English Gigaword datasets, which span a variety of document styles. Further analysis shows that the summaries generated by TED are abstractive and contain even higher proportions of novel tokens than those from supervised models.
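The abstract describes pretraining on the lead bias of news articles, i.e., treating the opening sentences of an article as a pseudo-summary of the rest. The following minimal Python sketch illustrates one plausible way to build such (source, pseudo-summary) pairs for unsupervised pretraining; the function and parameter names (make_lead_bias_pair, lead_k) are illustrative assumptions, not taken from the TED codebase.

from typing import List, Tuple

def make_lead_bias_pair(sentences: List[str], lead_k: int = 3) -> Tuple[str, str]:
    # The leading lead_k sentences act as the pseudo-summary target;
    # the remaining sentences form the source the model must summarize.
    target = " ".join(sentences[:lead_k])
    source = " ".join(sentences[lead_k:])
    return source, target

# Example usage on a toy article already split into sentences.
article = [
    "The city council approved the new transit plan on Monday.",
    "The plan adds three bus routes and extends light rail service.",
    "Funding comes from a voter-approved levy passed last year.",
    "Construction is expected to begin next spring.",
    "Officials say the expansion will cut average commute times.",
]
source, target = make_lead_bias_pair(article, lead_k=3)
print("SOURCE:", source)
print("TARGET:", target)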
Year
2020
DOI
10.18653/V1/2020.FINDINGS-EMNLP.168
Venue
EMNLP
DocType
Conference
Volume
2020.findings-emnlp
Citations
0
PageRank
0.34
References
24
Authors
6
Name            Order  Citations  PageRank
Ziyi Yang       1      0          0.34
Chenguang Zhu   2      328        22.92
Robert Gmyr     3      0          1.69
Michael Zeng    4      0          0.34
Xuedong Huang   5      13902      83.19
Eric Darve      6      0          0.34