Title
Leveraging Lead Bias for Zero-shot Abstractive News Summarization
Abstract
A typical journalistic convention in news articles is to deliver the most salient information at the beginning, a phenomenon known as the lead bias. While this bias can be exploited when generating a summary, it has a detrimental effect on teaching a model to discriminate and extract important information in general. We propose that the lead bias can be leveraged in our favor in a simple and effective way to pre-train abstractive news summarization models on large-scale unlabeled news corpora: predicting the leading sentences from the rest of an article. We collect a massive news corpus and conduct data cleaning and filtering via statistical analysis. We then apply self-supervised pre-training on this dataset to the existing generation models BART and T5 for domain adaptation. Via extensive experiments on six benchmark datasets, we show that this approach dramatically improves summarization quality and achieves state-of-the-art results for zero-shot news summarization without any fine-tuning. For example, on the DUC2003 dataset, the ROUGE-1 score of BART increases by 13.7% after the lead-bias pre-training. We deploy the model in Microsoft News and provide public APIs as well as a demo website for multilingual news summarization.
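The abstract's self-supervised objective — treating the leading sentences of an unlabeled news article as a pseudo-summary and the remainder as the model input — can be sketched as below. This is an illustrative reconstruction, not the authors' code; the function name, the choice of three lead sentences, and the minimum-length filter (a stand-in for the statistical filtering the abstract mentions) are all assumptions.

```python
# Hypothetical sketch of lead-bias pre-training pair construction:
# the leading sentences become the target (pseudo-summary) and the
# rest of the article becomes the source fed to the generation model.
def make_pretraining_pair(article_sentences, num_lead=3, min_body=10):
    """Split a sentence-tokenized article into a (source, target) pair.

    Returns None when the article is too short to yield a useful pair,
    a simple proxy for the corpus cleaning/filtering step.
    """
    if len(article_sentences) < num_lead + min_body:
        return None
    # Leading sentences act as the silver-standard summary.
    target = " ".join(article_sentences[:num_lead])
    # The remainder of the article is the input the model summarizes.
    source = " ".join(article_sentences[num_lead:])
    return source, target

article = [f"Sentence {i}." for i in range(1, 16)]
pair = make_pretraining_pair(article)
```

Each (source, target) pair produced this way can be fed to a sequence-to-sequence model such as BART or T5 with an ordinary generation loss, requiring no human-written summaries.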
Year
2021
DOI
10.1145/3404835.3462846
Venue
Research and Development in Information Retrieval
Keywords
lead bias, zero-shot summarization, pre-training, domain adaptation
DocType
Conference
Citations
1
PageRank
0.37
References
2
Authors
5
Name          | Order | Citations | PageRank
Chenguang Zhu | 1     | 328       | 22.92
Han-Shuo Ye   | 2     | 16        | 2.88
Robert Gmyr   | 3     | 1         | 0.37
Michael Zeng  | 4     | 4         | 6.85
Xuedong Huang | 5     | 13902     | 83.19