Title
Prototypical Representation Learning for Relation Extraction
Abstract
Recognizing relations between entities is a pivotal task of relational learning. Learning relation representations from unstructured entity-linked corpora has previously received little study because of the rich, complicated ways in which human language expresses relations. This paper aims to learn predictive, interpretable, and robust relation representations from textual data that are effective in different settings, including supervised, distantly supervised, and few-shot learning. Instead of relying solely on the supervision from labels (which could be noisy), we propose to infer a latent prototype for each relation from contextual information to best explore the intrinsic semantics of relations. Prototypes are representations in a latent space that abstract the canonical relations between entities in the textual data. We learn the prototypes with a collaborative metric learning approach that uses hybrid metric functions to measure prototype-statement and statement-statement similarities. With this collaborative strategy, the approach not only trains effective encoders but also yields meaningful, interpretable latent prototypes for the final classification. Experimental results on several relation learning tasks show that our model significantly outperforms previous state-of-the-art models.
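The abstract mentions hybrid metric functions over prototype-statement and statement-statement similarities but gives no formulas. Below is a minimal PyTorch sketch of one way such an objective could be wired up; everything here (the names PrototypeRelationModel and hybrid_metric_loss, the margin-based loss forms, the toy encoder) is an assumption for illustration, not the paper's actual method.

```python
import torch
import torch.nn.functional as F


class PrototypeRelationModel(torch.nn.Module):
    """Toy prototype-based relation classifier (an illustrative sketch,
    not the paper's architecture). Each relation owns a learnable
    prototype vector; statements are scored by cosine similarity."""

    def __init__(self, encoder, num_relations, dim):
        super().__init__()
        self.encoder = encoder  # any module mapping statements to dim-d vectors
        self.prototypes = torch.nn.Parameter(torch.randn(num_relations, dim))

    def forward(self, statements):
        z = F.normalize(self.encoder(statements), dim=-1)  # statement embeddings
        p = F.normalize(self.prototypes, dim=-1)           # relation prototypes
        return z, z @ p.t()                                # prototype-statement similarities


def hybrid_metric_loss(z, sims, labels, margin=0.2):
    """Two similarity terms, echoing the abstract's hybrid metrics:
    (1) prototype-statement: rank the true relation's prototype above the
        hardest wrong prototype by a margin;
    (2) statement-statement: push same-relation statement pairs to be
        at least `margin`-similar. Both forms are assumptions for illustration."""
    n = labels.size(0)

    # (1) prototype-statement margin ranking
    pos = sims[torch.arange(n), labels]
    mask = F.one_hot(labels, sims.size(1)).bool()
    neg = sims.masked_fill(mask, float("-inf")).max(dim=1).values
    proto_term = F.relu(margin - pos + neg).mean()

    # (2) statement-statement agreement for same-label pairs
    pair_sims = z @ z.t()
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~torch.eye(n, dtype=torch.bool)
    stmt_term = F.relu(margin - pair_sims[same]).mean() if same.any() else z.new_zeros(())

    return proto_term + stmt_term


# Smoke test with a bag-of-embeddings stand-in for a real sentence encoder.
encoder = torch.nn.EmbeddingBag(1000, 64)              # hypothetical toy encoder
model = PrototypeRelationModel(encoder, num_relations=5, dim=64)
tokens = torch.randint(0, 1000, (8, 12))               # 8 statements, 12 token ids each
labels = torch.randint(0, 5, (8,))
z, sims = model(tokens)
hybrid_metric_loss(z, sims, labels).backward()
```

Classification then amounts to taking the argmax over `sims`, which is consistent with the abstract's claim that the learned prototypes are used directly for the final classification.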
Year: 2021
Venue: ICLR
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 10

Name           Order   Citations   PageRank
Ning Ding      1       1           1.71
Xiaobin Wang   2       94          12.59
Yao Fu         3       0           0.34
Guangwei Xu    4       9           9.18
Rui Wang       5       54          7.75
Pengjun Xie    6       0           0.68
Shen Ying      7       73          23.48
Fei Huang      8       506         56.44
Zheng Hai-Tao  9       142         24.39
Rui Zhang      10      1208        67.26