Title
Personalized Knowledge Distillation for Recommender System
Abstract
Knowledge Distillation (KD) has been widely studied for recommender systems. KD is a model-independent strategy that produces a small but powerful student model by transferring knowledge from a pre-trained large teacher model. Recent work has shown that knowledge from the teacher's representation space significantly improves the student model. The state-of-the-art method, named Distillation Experts (DE), adopts cluster-wise distillation that transfers the knowledge of each representation cluster separately, so that the various preference knowledge is distilled in a balanced manner. However, it is challenging to apply DE to a new environment, since its performance depends heavily on several key assumptions and hyperparameters that must be tuned for each dataset and each base model. In this work, we propose a novel method, dubbed Personalized Hint Regression (PHR), which distills the preference knowledge in a balanced way without relying on any assumption about the representation space or on any method-specific hyperparameters. To circumvent the clustering, PHR employs a personalization network that enables personalized distillation into the student space for each user/item representation, which can be viewed as a generalization of DE. Extensive experiments on real-world datasets show that PHR achieves performance comparable to, or even better than, DE tuned by a grid search over all of its hyperparameters.
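To make the distillation idea described above concrete, the following is a minimal, hypothetical PyTorch sketch of a personalized hint-regression loss: a small personalization network produces a per-user/item projection that maps the teacher representation into the (smaller) student space, and the distillation loss is an L2 hint regression against the student representation. All names, layer sizes, and dimensions here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersonalizedHintRegression(nn.Module):
    """Illustrative sketch of a personalized hint-regression loss.

    A personalization network, conditioned on the teacher representation,
    emits an entity-specific projection matrix that maps the teacher
    representation into the student space. The loss is the L2 distance
    between the projected teacher representation and the student
    representation. Dimensions and layers are assumptions for illustration.
    """

    def __init__(self, teacher_dim: int, student_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Personalization network: outputs a flattened
        # (teacher_dim x student_dim) projection per user/item.
        self.personalization_net = nn.Sequential(
            nn.Linear(teacher_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, teacher_dim * student_dim),
        )
        self.teacher_dim = teacher_dim
        self.student_dim = student_dim

    def forward(self, teacher_emb: torch.Tensor, student_emb: torch.Tensor) -> torch.Tensor:
        # teacher_emb: (batch, teacher_dim), student_emb: (batch, student_dim)
        batch = teacher_emb.size(0)
        # Per-entity projection matrices: (batch, teacher_dim, student_dim)
        proj = self.personalization_net(teacher_emb).view(
            batch, self.teacher_dim, self.student_dim
        )
        # Map each teacher representation into the student space.
        projected = torch.bmm(teacher_emb.unsqueeze(1), proj).squeeze(1)
        # Hint-regression (L2) loss against the student representation.
        return F.mse_loss(projected, student_emb)


# Usage sketch: the loss would be added to the student's training objective.
if __name__ == "__main__":
    phr = PersonalizedHintRegression(teacher_dim=200, student_dim=20)
    t = torch.randn(32, 200)                       # frozen teacher embeddings
    s = torch.randn(32, 20, requires_grad=True)    # student embeddings
    loss = phr(t, s)
    loss.backward()
```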
Year
2022
DOI
10.1016/j.knosys.2021.107958
Venue
Knowledge-Based Systems
Keywords
Recommender System, Knowledge Distillation, Model compression, Retrieval efficiency
DocType
Journal
Volume
239
ISSN
0950-7051
Citations
0
PageRank
0.34
References
0
Authors
4
Name, Order, Citations, PageRank
SeongKu Kang, 1, 0, 0.34
Dongha Lee, 2, 0, 0.34
Wonbin Kweon, 3, 0, 0.34
Hwanjo Yu, 4, 1715, 114.02