Title
Neural Abstractive Text Summarization with Sequence-to-Sequence Models
Abstract
In the past few years, neural abstractive text summarization with sequence-to-sequence (seq2seq) models has gained a lot of popularity. Many interesting techniques have been proposed to improve seq2seq models, making them capable of handling different challenges, such as saliency, fluency, and human readability, and of generating high-quality summaries. Generally speaking, most of these techniques differ in one of three categories: network structure, parameter inference, and decoding/generation. There are also other concerns, such as efficiency and parallelism for training a model. In this article, we provide a comprehensive literature survey on different seq2seq models for abstractive text summarization from the viewpoint of network structures, training strategies, and summary generation algorithms. Several models were first proposed for language modeling and generation tasks, such as machine translation, and were later applied to abstractive text summarization; hence, we also provide a brief review of these models. As part of this survey, we also develop an open-source library, the Neural Abstractive Text Summarizer (NATS) toolkit, for abstractive text summarization. An extensive set of experiments has been conducted on the widely used CNN/Daily Mail dataset to examine the effectiveness of several different neural network components. Finally, we benchmark two models implemented in NATS on two recently released datasets, namely, Newsroom and Bytecup.
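Among the summary generation algorithms the survey covers, beam-search decoding (also listed in the keywords below) is the most common. As a purely illustrative aside, here is a minimal, self-contained Python sketch of beam search over a seq2seq decoder's output distribution; the log_prob_step interface, the toy vocabulary, and all parameter values are assumptions for illustration, not the NATS implementation.

    import math
    from typing import Callable, List, Tuple

    def beam_search(
        log_prob_step: Callable[[List[int]], List[float]],  # hypothetical scorer: prefix -> log-probs over the vocabulary
        bos: int,
        eos: int,
        beam_width: int = 4,
        max_len: int = 20,
    ) -> List[int]:
        """Return the highest-scoring token sequence found by beam search."""
        # Each hypothesis is (cumulative log-probability, token sequence).
        beams: List[Tuple[float, List[int]]] = [(0.0, [bos])]
        finished: List[Tuple[float, List[int]]] = []

        for _ in range(max_len):
            candidates: List[Tuple[float, List[int]]] = []
            for score, seq in beams:
                for tok, lp in enumerate(log_prob_step(seq)):
                    hyp = (score + lp, seq + [tok])
                    # Hypotheses that emit EOS are complete; others stay in the pool.
                    (finished if tok == eos else candidates).append(hyp)
            if not candidates:
                break
            # Keep only the top-k partial hypotheses at each step.
            beams = sorted(candidates, key=lambda h: h[0], reverse=True)[:beam_width]

        pool = finished or beams
        return max(pool, key=lambda h: h[0])[1]

    # Toy usage: a 4-token vocabulary (0=BOS, 1=EOS, 2, 3) with fixed log-probabilities.
    if __name__ == "__main__":
        table = [math.log(p) for p in (0.05, 0.2, 0.3, 0.45)]
        print(beam_search(lambda seq: table, bos=0, eos=1, beam_width=2, max_len=5))

Pruning to the top beam_width partial hypotheses at each step trades the exactness of exhaustive search for tractable decoding, which is why beam search is the standard generation strategy in seq2seq summarizers.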
Year
2018
DOI
10.1145/3419106
Venue
ACM/IMS Transactions on Data Science
Keywords
Abstractive text summarization, attention model, beam search, deep reinforcement learning, pointer-generator network, sequence-to-sequence models
DocType
Journal
Volume
2
Issue
1
ISSN
2691-1922
Citations
6
PageRank
0.40
References
75
Authors
4
Name                 Order  Citations  PageRank
Tian Shi             1      18         4.05
Yaser Keneshloo      2      27         2.99
Naren Ramakrishnan   3      1913       176.25
Chandan K. Reddy     4      803        73.50