Title
Language Models as Knowledge Embeddings.
Abstract
Knowledge embeddings (KE) represent a knowledge graph (KG) by embedding entities and relations into continuous vector spaces. Existing methods are mainly structure-based or description-based. Structure-based methods learn representations that preserve the inherent structure of KGs, but they struggle to represent the abundant long-tail entities in real-world KGs, which have limited structural information. Description-based methods leverage textual information and language models; however, prior approaches in this direction barely outperform structure-based ones and suffer from problems such as expensive negative sampling and restrictive demands for entity descriptions. In this paper, we propose LMKE, which adopts Language Models to derive Knowledge Embeddings, aiming both to enrich the representations of long-tail entities and to solve the problems of prior description-based methods. We formulate description-based KE learning within a contrastive learning framework to improve efficiency in training and evaluation. Experimental results show that LMKE achieves state-of-the-art performance on KE benchmarks of link prediction and triple classification, especially for long-tail entities.
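The following is a minimal, illustrative sketch of the general idea described in the abstract (description-based KE with a language-model encoder trained via in-batch contrastive learning); it is not the authors' implementation. The encoder choice (bert-base-uncased), [CLS]-token pooling, the toy descriptions, and the temperature are assumptions made for demonstration only.

# Illustrative sketch: encode textual descriptions with a language model and
# train with an in-batch contrastive (InfoNCE-style) objective.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # assumed encoder
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode(texts):
    """Encode a list of textual descriptions into [CLS] embeddings."""
    batch = tokenizer(texts, padding=True, truncation=True, max_length=64,
                      return_tensors="pt")
    out = encoder(**batch)
    return out.last_hidden_state[:, 0]            # [CLS] token representation

# Toy triples: each (head description + relation) query should match the
# description of its tail entity.
queries = [
    "Leonardo da Vinci, Italian polymath. relation: painted",
    "Ada Lovelace, English mathematician. relation: wrote",
]
tails = [
    "Mona Lisa, a portrait painting.",
    "Notes on the Analytical Engine, an early computing paper.",
]

q = F.normalize(encode(queries), dim=-1)
t = F.normalize(encode(tails), dim=-1)

# In-batch contrastive loss: the positive for each query is the tail with the
# same index; the other tails in the batch serve as negatives, so no separate
# negative sampling is needed.
temperature = 0.05                                # assumed hyperparameter
logits = q @ t.T / temperature
labels = torch.arange(q.size(0))
loss = F.cross_entropy(logits, labels)
loss.backward()
print(f"contrastive loss: {loss.item():.4f}")

Reusing in-batch tails as negatives is what makes this kind of contrastive formulation cheaper than scoring explicitly sampled negative triples one by one.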
Year
2022
DOI
10.24963/ijcai.2022/318
Venue
International Joint Conference on Artificial Intelligence (IJCAI)
Keywords
Data Mining: Knowledge Graphs and Knowledge Base Completion, Natural Language Processing: Language Models, Natural Language Processing: Embeddings, Machine Learning: Representation Learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name            Order  Citations  PageRank
Xintao Wang     1      0          0.34
Qianyu He       2      0          0.68
Jiaqing Liang   3      37         9.59
Yanghua Xiao    4      482        54.90