Abstract
Information extraction tasks such as entity relation extraction and event extraction are of great importance for natural language processing and knowledge graph construction. In this paper, we revisit end-to-end information extraction as a sequence generation task. Since generative information extraction may struggle to capture long-term dependencies and can generate unfaithful triples, we introduce a novel model: contrastive information extraction with a generative transformer. Specifically, we use a single shared transformer module for encoder-decoder generation. To generate faithful results, we propose a novel triplet contrastive training objective. Moreover, we introduce two mechanisms to further improve model performance: batch-wise dynamic attention-masking and triple-wise calibration. Experimental results on five datasets (NYT, WebNLG, MIE, ACE-2005, and MUC-4) show that our approach outperforms the baselines.
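The abstract does not spell out the triplet contrastive objective, so the following PyTorch sketch shows one standard margin-based form such an objective can take: ground-truth triples are scored against corrupted (unfaithful) ones. The function name `triplet_contrastive_loss`, the `margin` hyperparameter, and the assumption that the model reduces each candidate triple to a scalar score are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def triplet_contrastive_loss(pos_scores: torch.Tensor,
                             neg_scores: torch.Tensor,
                             margin: float = 1.0) -> torch.Tensor:
    """Margin-based contrastive loss over triple scores.

    pos_scores: scores for ground-truth (faithful) triples, shape (batch,).
    neg_scores: scores for corrupted (unfaithful) triples, shape (batch,).
    Pushes each positive score above its paired negative by at least `margin`.
    """
    return F.relu(margin - pos_scores + neg_scores).mean()

# Hypothetical usage with placeholder scores; in practice the scores would
# come from the shared transformer's representation of each candidate triple.
pos = torch.randn(8, requires_grad=True)
neg = torch.randn(8, requires_grad=True)
loss = triplet_contrastive_loss(pos, neg)
loss.backward()
```

The intended effect matches the abstract's motivation: training rewards faithful triples over unfaithful ones, discouraging the generator from producing triples unsupported by the input.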
Field | Value
---|---
Year | 2021
DOI | 10.1109/TASLP.2021.3110126
Venue | IEEE/ACM Transactions on Audio, Speech, and Language Processing
Keywords | Task analysis, Data mining, Transformers, Information retrieval, Feature extraction, Speech processing, Pipelines, Event extraction, information extraction, triple extraction
DocType | Journal
Volume | 29
Issue | 1
ISSN | 2329-9290
Citations | 1
PageRank | 0.44
References | 14
Authors | 8

Name | Order | Citations | PageRank |
---|---|---|---|
Ningyu Zhang | 1 | 63 | 18.56 |
Hongbin Ye | 2 | 8 | 2.89 |
Shumin Deng | 3 | 32 | 10.61 |
Chuanqi Tan | 4 | 1 | 0.44 |
Mosha Chen | 5 | 2 | 3.50 |
Songfang Huang | 6 | 180 | 19.51 |
Fei Huang | 7 | 2 | 7.54 |
Huanhuan Chen | 8 | 731 | 101.79 |