Title
Knowledge-Aware Autoencoders For Explainable Recommender Systems
Abstract
Recommender systems have been widely used to help users find what they are looking for, thus tackling the information overload problem. After several years of research and industrial work aimed at improving accuracy and diversity metrics, explanation services for recommendation are gaining momentum as a tool to provide human-understandable feedback on results computed, in most cases, by black-box machine learning techniques. As a matter of fact, explanations may foster user satisfaction, trust, and loyalty in a system. In this paper, we evaluate how different kinds of information encoded in a Knowledge Graph are perceived by users when adopted to show them an explanation. More precisely, we compare how the use of categorical information, factual information, or a mixture of both in building explanations affects explanatory criteria for a recommender system. Experimental results are validated through an A/B testing platform that uses a recommendation engine based on a Semantics-Aware Autoencoder to build user profiles, which are in turn exploited to compute recommendation lists and to provide an explanation.
Year
2018
DOI
10.1145/3270323.3270327
Venue
PROCEEDINGS OF THE 3RD WORKSHOP ON DEEP LEARNING FOR RECOMMENDER SYSTEMS (DLRS)
Keywords
Explanation, Explainable Models, Recommender Systems, Deep Learning, Autoencoder Neural Networks
DocType
Conference
Volume
abs/1807.06300
Citations
0
PageRank
0.34
References
16
Authors
5
Name                  Order  Citations  PageRank
Vito Bellini          1      0          0.34
Angelo Schiavone      2      0          1.35
Tommaso Di Noia       3      1857       152.07
Azzurra Ragone        4      511        40.86
Eugenio Di Sciascio   5      1733       147.71