Title |
---|
HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization |
Abstract |
---|
To capture the semantic graph structure of raw text, most existing summarization approaches combine GNNs with a pre-trained model. However, these methods suffer from cumbersome procedures and inefficient computation on long documents. To mitigate these issues, this paper proposes HETFORMER, a Transformer-based pre-trained model with multi-granularity sparse attention for long-text extractive summarization. Specifically, we model the different types of semantic nodes in raw text as a potential heterogeneous graph and directly learn the heterogeneous relationships (edges) among nodes with the Transformer. Extensive experiments on both single- and multi-document summarization tasks show that HETFORMER achieves state-of-the-art ROUGE F1 scores while using less memory and fewer parameters. |
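The abstract's "multi-granularity sparse attention" suggests a Longformer-style pattern in which ordinary tokens attend only within a local window, while a few higher-granularity nodes (e.g., sentence- or entity-level nodes) attend globally, linking distant parts of a long document. The sketch below is a minimal illustration of that general idea under those assumptions; the function name, parameters, and the exact window/global pattern are hypothetical and do not reproduce HETFORMER's actual attention patterns.

```python
# Illustrative sketch only: a multi-granularity sparse attention mask that
# combines a token-level sliding window with a handful of global nodes.
# All names and parameters are hypothetical, not HETFORMER's implementation.
import numpy as np


def sparse_attention_mask(seq_len: int, window: int, global_idx: list[int]) -> np.ndarray:
    """Return a boolean (seq_len, seq_len) mask; True = attention allowed."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    # Token-level locality: each position attends to a +/- window neighborhood.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
    # Higher-granularity nodes (e.g., sentence nodes) attend to every position
    # and are attended to by every position, acting as graph-like hub edges.
    for g in global_idx:
        mask[g, :] = True
        mask[:, g] = True
    return mask


# Example: 16 positions, window of 2, with positions 0 and 8 as global nodes.
mask = sparse_attention_mask(seq_len=16, window=2, global_idx=[0, 8])
print(mask.sum(), "allowed attention pairs out of", mask.size)
```

Because each row of such a mask has O(window + |global_idx|) allowed entries rather than O(seq_len), the attention cost grows roughly linearly with sequence length, which is the usual motivation for sparse patterns on long documents.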
Year | Venue | DocType |
---|---|---|
2021 | EMNLP | Conference |
Volume | Citations | PageRank
---|---|---|
2021.emnlp-main | 0 | 0.34
References | Authors
---|---|
0 | 6
Name | Order | Citations | PageRank |
---|---|---|---|
Ye Liu | 1 | 0 | 1.69 |
Jian-Guo Zhang | 2 | 0 | 0.68 |
Yao Wan | 3 | 0 | 2.03 |
Congying Xia | 4 | 22 | 6.49 |
Lifang He | 5 | 369 | 32.74 |
Philip S. Yu | 6 | 30670 | 3474.16 |