Title
Latent Relation Language Models
Abstract
In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations. This model has a number of attractive properties: it not only improves language modeling performance, but is also able to annotate the posterior probability of entity spans for a given text through relations. Experiments demonstrate empirical improvements over both word-based language models and a previous approach that incorporates knowledge graph information. Qualitative analysis further demonstrates the proposed model's ability to learn to predict appropriate relations in context.
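To make the abstract's description more concrete, the sketch below illustrates the general idea of a latent-variable language model that, at each position, either generates an ordinary word or copies a multi-token entity span licensed by a knowledge-graph relation, and scores text by marginalizing over all such segmentations with a forward dynamic program. This is an illustrative assumption-laden sketch, not the authors' implementation: the function name, the placeholder probabilities, and the example spans are all hypothetical.

# Minimal sketch (hypothetical, not the authors' code) of marginalizing over
# two per-token generation sources: word-by-word generation vs. copying an
# entity span proposed by a knowledge-graph relation.
import math
from typing import Dict, List, Tuple

def marginal_log_likelihood(
    word_logp: List[float],                   # log P(token t | history) under word generation
    span_logp: List[Tuple[int, int, float]],  # (start, end_exclusive, log P(span | history, relation))
) -> float:
    """Forward dynamic program over latent segmentations.

    alpha[t] = log-probability of the first t tokens, summed over every way
    of covering them with single words or relation-licensed entity spans.
    """
    n = len(word_logp)
    alpha = [float("-inf")] * (n + 1)
    alpha[0] = 0.0

    # Index candidate spans by end position for quick lookup.
    spans_ending_at: Dict[int, List[Tuple[int, float]]] = {}
    for start, end, lp in span_logp:
        spans_ending_at.setdefault(end, []).append((start, lp))

    for t in range(1, n + 1):
        # Option 1: token t-1 is generated as an ordinary word.
        candidates = [alpha[t - 1] + word_logp[t - 1]]
        # Option 2: tokens [start, t) are generated as one entity span via a relation.
        for start, lp in spans_ending_at.get(t, []):
            candidates.append(alpha[start] + lp)
        # Log-sum-exp over the competing latent derivations.
        m = max(candidates)
        alpha[t] = m + math.log(sum(math.exp(c - m) for c in candidates))

    return alpha[n]

if __name__ == "__main__":
    # Hypothetical scores for "barack obama was born in hawaii", with candidate
    # spans "barack obama" (tokens 0-2) and "hawaii" (token 5) from relations.
    word_lp = [-5.0, -6.0, -2.0, -2.5, -1.5, -7.0]
    span_lp = [(0, 2, -3.0), (5, 6, -2.0)]
    print(marginal_log_likelihood(word_lp, span_lp))

The same alpha table, combined with a backward pass, is what would let such a model read off posterior probabilities of entity spans for a given text, as the abstract describes.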
Year
2020
Venue
National Conference on Artificial Intelligence
DocType
Conference
Volume
34
ISSN
2159-5399
Citations
1
PageRank
0.36
References
0
Authors
4
Name              Order  Citations  PageRank
Hiroaki Hayashi   1      2          1.08
Zecong Hu         2      1          1.04
Chen-Yan Xiong    3      405        30.82
Graham Neubig     4      989        130.31