Title
Learning from Sibling Mentions with Scalable Graph Inference in Fine-Grained Entity Typing
Abstract
In this paper, we first show empirically that existing models struggle with hard mentions whose contexts are insufficient, which limits their overall typing performance. To address this, we propose exploiting sibling mentions to enhance mention representations. Specifically, we present two metrics for sibling selection and employ an attentive graph neural network to aggregate information from sibling mentions. The proposed graph model is scalable in that unseen test mentions can be added as new nodes at inference time. Extensive experiments demonstrate the effectiveness of our sibling-learning strategy: our model outperforms ten strong baselines, and further analysis confirms that sibling mentions help clarify the types of hard mentions.
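The abstract describes the mechanism only at a high level. As a minimal sketch of the idea, the hypothetical helpers below (select_siblings, attentive_sibling_aggregation, the cosine-similarity selection metric, and the single-head dot-product attention are all illustrative assumptions, not the paper's actual model) show how sibling mentions could be selected and attentively aggregated, and how an unseen test mention can be attached as a new node at inference time without retraining.

```python
import numpy as np

def select_siblings(mention_emb, candidate_embs, k=5):
    """Pick the k nearest candidate mentions by cosine similarity.

    One plausible instantiation of a sibling-selection metric; the
    paper's two actual metrics are not specified in this record.
    """
    sims = candidate_embs @ mention_emb / (
        np.linalg.norm(candidate_embs, axis=1) * np.linalg.norm(mention_emb) + 1e-8
    )
    return np.argsort(-sims)[:k]

def attentive_sibling_aggregation(mention_emb, sibling_embs):
    """Aggregate sibling embeddings with single-head dot-product
    attention, then mix the result back into the mention embedding."""
    scores = sibling_embs @ mention_emb / np.sqrt(mention_emb.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    aggregated = weights @ sibling_embs            # weighted sum of siblings
    return 0.5 * mention_emb + 0.5 * aggregated    # residual-style mixing

# Scalable inference: an unseen test mention joins the graph as a new
# node by selecting siblings among training mentions and aggregating,
# with no retraining of the graph model.
rng = np.random.default_rng(0)
train_embs = rng.normal(size=(100, 64))   # embeddings of training mentions
test_emb = rng.normal(size=64)            # an unseen test mention
sib_idx = select_siblings(test_emb, train_embs, k=5)
enhanced = attentive_sibling_aggregation(test_emb, train_embs[sib_idx])
```

The residual-style mixing in the sketch keeps the original mention signal, so siblings refine rather than replace the representation of a hard mention.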
Year
2022
DOI
10.18653/v1/2022.acl-long.147
Venue
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol 1: (Long Papers)
DocType
Conference
Volume
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Citations
0
PageRank
0.34
References
0
Authors
7
Name           Order  Citations  PageRank
Yi Chen        1      0          0.34
Jiayang Cheng  2      0          0.34
Haiyun Jiang   3      0          1.01
Lemao Liu      4      87         18.74
Haisong Zhang  5      15         8.00
Shuming Shi    6      620        58.27
Xu Ruifeng     7      432        53.04