Title
Contrastive Information Extraction With Generative Transformer
Abstract
Information extraction tasks such as entity-relation extraction and event extraction are of great importance for natural language processing and knowledge graph construction. In this paper, we revisit end-to-end information extraction as a sequence generation task. Since generative information extraction may struggle to capture long-term dependencies and can generate unfaithful triples, we introduce a novel model: contrastive information extraction with a generative transformer. Specifically, we use a single shared transformer module for encoder-decoder-based generation. To generate faithful results, we propose a novel triplet contrastive training objective. Moreover, we introduce two mechanisms to further improve model performance, namely batch-wise dynamic attention masking and triple-wise calibration. Experimental results on five datasets (NYT, WebNLG, MIE, ACE 2005, and MUC-4) show that our approach achieves better performance than the baselines.
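The abstract's central mechanism, a triplet contrastive objective added on top of the usual sequence-generation loss so that faithful triples score above corrupted ones, can be sketched roughly as follows. This is a minimal illustration under assumed names (triplet_contrastive_loss, total_loss, margin, alpha), not the authors' implementation.

```python
# Minimal PyTorch sketch of a triplet contrastive objective combined with a
# sequence-generation loss. All names and hyperparameters are illustrative
# assumptions, not the paper's actual code.
import torch
import torch.nn.functional as F

def triplet_contrastive_loss(enc_state, gold_emb, corrupt_emb, margin=1.0):
    """Push the encoder state toward embeddings of gold (faithful) triples
    and away from corrupted triples (e.g., a swapped entity or relation)."""
    pos = F.cosine_similarity(enc_state, gold_emb, dim=-1)     # similarity to gold triple
    neg = F.cosine_similarity(enc_state, corrupt_emb, dim=-1)  # similarity to corrupted triple
    return F.relu(margin - pos + neg).mean()                   # hinge loss on the margin

def total_loss(gen_logits, targets, enc_state, gold_emb, corrupt_emb, alpha=0.5):
    """Token-level cross-entropy for generation plus the weighted contrastive term."""
    ce = F.cross_entropy(gen_logits.view(-1, gen_logits.size(-1)), targets.view(-1))
    return ce + alpha * triplet_contrastive_loss(enc_state, gold_emb, corrupt_emb)

# Toy usage with random tensors (batch 4, sequence length 8, vocab 100, hidden 16):
logits = torch.randn(4, 8, 100)
targets = torch.randint(0, 100, (4, 8))
enc = torch.randn(4, 16)
loss = total_loss(logits, targets, enc, torch.randn(4, 16), torch.randn(4, 16))
```

In the paper's setting the negatives would come from corrupting gold triples; here random tensors stand in for both the gold and corrupted triple embeddings.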
Year
2021
DOI
10.1109/TASLP.2021.3110126
Venue
IEEE/ACM Transactions on Audio, Speech, and Language Processing
Keywords
Task analysis, Data mining, Transformers, Information retrieval, Feature extraction, Speech processing, Pipelines, Event extraction, information extraction, triple extraction
DocType
Journal
Volume
29
Issue
1
ISSN
2329-9290
Citations
1
PageRank
0.44
References
14
Authors
8
Name            Order  Citations  PageRank
Ningyu Zhang    1      63         18.56
Hongbin Ye      2      8          2.89
Shumin Deng     3      32         10.61
Chuanqi Tan     4      1          0.44
Mosha Chen      5      2          3.50
Songfang Huang  6      180        19.51
Fei Huang       7      2          7.54
Huanhuan Chen   8      731        101.79