Abstract

In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations. This model has a number of attractive properties: it not only improves language modeling performance, but is also able to annotate the posterior probability of entity spans for a given text through relations. Experiments demonstrate empirical improvements over both word-based language models and a previous approach that incorporates knowledge graph information. Qualitative analysis further demonstrates the proposed model's ability to learn to predict appropriate relations in context.
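As a rough illustration of the mechanism the abstract describes (a latent choice between generating ordinary words and copying entity aliases licensed by knowledge-graph relations, marginalized over all segmentations of the text), the sketch below implements a toy version with a unigram word model and a forward pass over span lattices. All relation names, aliases, and probabilities here are invented placeholders; the paper's actual model uses learned neural parameterizations, not this toy.

```python
import math
from collections import defaultdict

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == float("-inf"):
        return b
    if b == float("-inf"):
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# Toy "knowledge graph": relations of a topic entity to surface aliases.
# These relations and aliases are hypothetical, not from the paper.
RELATIONS = {
    "genre": ["science fiction", "sci-fi"],
    "author": ["isaac asimov"],
}

# Toy unigram word model with a small fallback probability.
WORD_PROBS = defaultdict(lambda: 1e-4)
WORD_PROBS.update({"the": 0.05, "novel": 0.01, "is": 0.04, "by": 0.02})

P_REL = 0.2    # prob. of a relation-based (entity copy) step, assumed fixed
P_WORD = 0.8   # prob. of a word-based step

def log_marginal(tokens):
    """Forward algorithm over the latent segmentation: each step either
    emits one word from the word model, or copies a whole entity alias
    via a knowledge-graph relation (uniform over matching aliases)."""
    n = len(tokens)
    alpha = [float("-inf")] * (n + 1)   # alpha[i] = log p(tokens[:i])
    alpha[0] = 0.0
    aliases = [a.split() for al in RELATIONS.values() for a in al]
    for i in range(n):
        if alpha[i] == float("-inf"):
            continue
        # Word-based step: emit tokens[i] from the unigram model.
        score = alpha[i] + math.log(P_WORD * WORD_PROBS[tokens[i]])
        alpha[i + 1] = logaddexp(alpha[i + 1], score)
        # Relation-based step: match a full alias span starting at i.
        for alias in aliases:
            j = i + len(alias)
            if tokens[i:j] == alias:
                score = alpha[i] + math.log(P_REL / len(aliases))
                alpha[j] = logaddexp(alpha[j], score)
    return alpha[n]  # log-likelihood, summed over all segmentations

print(log_marginal("the novel is science fiction by isaac asimov".split()))
```

Span-level posteriors of the kind the abstract mentions (which relation, if any, explains a given span) would follow from a backward pass over the same lattice; in the paper these distributions are produced by the trained neural model rather than fixed tables.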
| Year | Venue | DocType | Volume | ISSN | Citations | PageRank | References | Authors |
|---|---|---|---|---|---|---|---|---|
| 2020 | National Conference on Artificial Intelligence | Conference | 34 | 2159-5399 | 1 | 0.36 | 0 | 4 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Hiroaki Hayashi | 1 | 2 | 1.08 |
| Zecong Hu | 2 | 1 | 1.04 |
| Chen-Yan Xiong | 3 | 405 | 30.82 |
| Graham Neubig | 4 | 989 | 130.31 |