Title
Generating Representative Headlines for News Stories
Abstract
Millions of news articles are published online every day, which can be overwhelming for readers to follow. Grouping articles that report the same event into news stories is a common way of assisting readers in their news consumption. However, it remains a challenging research problem to efficiently and effectively generate a representative headline for each story. Automatic summarization of a document set has been studied for decades, but few studies have focused on generating representative headlines for a set of articles. Unlike summaries, which aim to capture the most information with the least redundancy, headlines aim to briefly capture the information jointly shared by the story's articles and to exclude information specific to each individual article. In this work, we study the problem of generating representative headlines for news stories. We develop a distant supervision approach to train large-scale generation models without any human annotation. The proposed approach centers on two technical components. First, we propose a multi-level pre-training framework that incorporates massive unlabeled corpora with different quality-vs.-quantity balances at different levels. We show that models trained within this multi-level pre-training framework outperform those trained only on the human-curated corpus. Second, we propose a novel self-voting-based article attention layer to extract salient information shared by multiple articles. We show that models incorporating this attention layer are robust to potential noise in news stories and outperform existing baselines on both clean and noisy datasets. We further enhance our model by incorporating human labels, and show that our distant supervision approach significantly reduces the demand for labeled data. Finally, to serve the research community, we publish the first manually curated benchmark dataset on headline generation for news stories, NewSHead, which contains 367K stories (each with 3-5 articles) and is 6.5 times larger than the current largest multi-document summarization dataset.
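The self-voting-based article attention described in the abstract can be pictured as the articles of a story scoring one another and then being pooled according to the votes they receive, so that information shared across articles dominates the story representation. The sketch below is a minimal illustration of that idea, not the authors' released code: the peer dot-product voting rule, the function name self_voting_attention, and the weighted-pooling step are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's implementation) of a self-voting
# article attention: each article "votes" for its peers by similarity,
# and articles that receive more votes get higher attention weight.
import numpy as np

def self_voting_attention(article_vecs: np.ndarray) -> np.ndarray:
    """article_vecs: (num_articles, hidden_dim) encodings of one story's articles.
    Returns attention weights of shape (num_articles,)."""
    # Assumed voting rule: votes[j, i] = similarity of voter j to candidate i.
    votes = article_vecs @ article_vecs.T            # (n, n)
    np.fill_diagonal(votes, -np.inf)                 # exclude self-votes
    votes = np.exp(votes - votes.max(axis=1, keepdims=True))
    votes /= votes.sum(axis=1, keepdims=True)        # each voter distributes one vote
    salience = votes.sum(axis=0)                     # total votes each article receives
    return salience / salience.sum()                 # normalized attention weights

# Usage: pool article encodings into a single story-level vector.
story_articles = np.random.randn(4, 128)             # 4 articles, 128-dim encodings
weights = self_voting_attention(story_articles)
story_vec = weights @ story_articles                 # (128,) story representation
```

Because the weights depend only on agreement among peers, an off-topic article in a noisy story collects few votes and contributes little to the pooled representation, which matches the robustness claim in the abstract.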
Year: 2020
DOI: 10.1145/3366423.3380247
Venue: WWW '20: The Web Conference 2020, Taipei, Taiwan, April 2020
DocType: Conference
ISBN: 978-1-4503-7023-3
Citations: 1
PageRank: 0.41
References: 0
Authors: 10
1. Xiaotao Gu
2. Yuning Mao
3. Jiawei Han
4. Jialu Liu
5. You Wu
6. Cong Yu
7. Daniel Finnie
8. Hongkun Yu
9. Jiaqi Zhai
10. Nicholas Zukoski