Title
How to Make Latent Factors Interpretable by Feeding Factorization Machines with Knowledge Graphs
Abstract
Model-based approaches to recommendation can recommend items with a very high level of accuracy. Unfortunately, even when the model embeds content-based information, moving to a latent space means losing references to the actual semantics of the recommended items. Consequently, interpreting the recommendation process becomes non-trivial. In this paper, we show how to initialize the latent factors of Factorization Machines with semantic features coming from a knowledge graph in order to train an interpretable model. In our model, semantic features are injected into the learning process so that the original informativeness of the items in the dataset is retained. The accuracy and effectiveness of the trained model have been tested on two well-known recommender-system datasets. By relying on the information encoded in the original knowledge graph, we have also evaluated the semantic accuracy and robustness of the knowledge-aware interpretability of the final model.
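The abstract only outlines the idea, so the following Python sketch illustrates one plausible reading of it: each latent dimension of the Factorization Machine is tied to a named knowledge-graph feature (a predicate-object pair), and item factors are initialized from the graph's incidence structure rather than at random. This is a minimal illustration under stated assumptions, not the authors' implementation; the toy triples, the variable names (`triples`, `V_items`, `V_users`), and the choice to use only the second-order user-item interaction are all illustrative.

```python
# Minimal sketch: knowledge-graph-initialized latent factors for an FM-style
# recommender. Assumption: one latent dimension per (predicate, object) pair,
# so every dimension keeps a human-readable semantic meaning.
import numpy as np

# Hypothetical KG triples in (item, predicate, object) form.
triples = [
    ("Inception",    "dbo:director", "Christopher_Nolan"),
    ("Inception",    "dbo:genre",    "Science_Fiction"),
    ("Interstellar", "dbo:director", "Christopher_Nolan"),
    ("Interstellar", "dbo:genre",    "Science_Fiction"),
    ("Amelie",       "dbo:genre",    "Romantic_Comedy"),
]

items = sorted({s for s, _, _ in triples})
features = sorted({(p, o) for _, p, o in triples})   # one dim per KG feature
f_index = {f: k for k, f in enumerate(features)}

# Initialize item latent vectors from the binary KG incidence matrix instead
# of random noise: V_items[i, k] = 1 iff item i is connected to feature k.
V_items = np.zeros((len(items), len(features)))
for i, item in enumerate(items):
    for s, p, o in triples:
        if s == item:
            V_items[i, f_index[(p, o)]] = 1.0

# Users have no KG counterpart, so their factors start small and random.
rng = np.random.default_rng(0)
n_users = 2
V_users = 0.01 * rng.standard_normal((n_users, len(features)))

def predict(u, i, w0=0.0):
    """FM-style score, reduced here to the user-item second-order term."""
    return w0 + V_users[u] @ V_items[i]

# After training (e.g., SGD on observed ratings, omitted here), each entry of
# V_items[i] still refers to a named KG feature, so the largest per-dimension
# contributions explain *why* an item is scored highly for a user.
u = 0
for i, item in enumerate(items):
    top = np.argsort(-(V_users[u] * V_items[i]))[:2]
    print(item, round(predict(u, i), 4), [features[k] for k in top])
```

The design point the sketch tries to capture is that interpretability comes from the initialization scheme itself: because training only refines the weights of dimensions that already name knowledge-graph features, the learned magnitudes can be read back as the importance of those semantic features.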
Year
2019
DOI
10.1007/978-3-030-30793-6_3
Venue
Lecture Notes in Computer Science
DocType
Conference
Volume
11778
ISSN
0302-9743
Citations
10
PageRank
0.44
References
0
Authors
5
Name                 Order  Citations  PageRank
Vito Walter Anelli   1      91         18.45
Tommaso Di Noia      2      24         2.31
Eugenio Di Sciascio  3      1733       147.71
Azzurra Ragone       4      511        40.86
Joseph Trotta        5      22         2.28