Title
Top-aware reinforcement learning based recommendation
Abstract
Reinforcement learning (RL) techniques have recently been introduced to recommender systems. Most existing research focuses on designing the policy and learning algorithms of the recommender agent, but seldom addresses the top-aware issue, i.e., unsatisfactory performance at the top positions of the recommendation list, which is crucial for real applications. To address this drawback, we propose a Supervised deep Reinforcement learning Recommendation framework named SRR. Within this framework, we utilize a supervised learning (SL) model to partially guide the learning of the recommendation policy, where the supervision signal and the RL signal are jointly employed and updated in a complementary fashion. We empirically find that suitable weak supervision helps to balance the immediate reward and the long-term reward, which nicely addresses the top-aware issue in RL-based recommendation. Moreover, we further investigate how different supervision signals impact the recommendation policy. Extensive experiments are carried out on two real-world datasets under both offline and simulated online evaluation settings, and the results demonstrate that the proposed methods indeed resolve the top-aware issue without much performance sacrifice in the long run, compared with state-of-the-art methods.
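The abstract's core idea, jointly employing a supervision signal and an RL signal in a complementary fashion, can be sketched as a weighted combination of a cross-entropy loss on the logged (clicked) item and a REINFORCE-style term on the recommended item. This is an illustrative assumption, not the paper's actual losses or weighting scheme; the function names, the fixed weight `alpha`, and the toy inputs are all hypothetical.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over raw item scores."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def joint_loss(scores, label, action, reward, alpha=0.5):
    """Hedged sketch of a joint SL + RL objective in the spirit of SRR.

    scores : raw scores from the policy network over candidate items
    label  : index of the ground-truth (clicked) item -> supervised signal
    action : index of the item the policy recommended -> RL signal
    reward : observed (long-term) return for that action
    alpha  : illustrative weight balancing supervision against the RL term
    """
    p = softmax(scores)
    sl_loss = -np.log(p[label] + 1e-12)            # cross-entropy supervision
    rl_loss = -reward * np.log(p[action] + 1e-12)  # REINFORCE-style term
    return alpha * sl_loss + (1.0 - alpha) * rl_loss
```

With `alpha` near 1 the objective degenerates to pure supervised learning (strong immediate-reward bias), while `alpha` near 0 recovers pure policy-gradient learning; the abstract's finding is that an intermediate, weak supervision level balances the two.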
Year: 2020
DOI: 10.1016/j.neucom.2020.07.057
Venue: Neurocomputing
Keywords: Recommendation, Top-aware, Reinforcement learning
DocType: Journal
Volume: 417
ISSN: 0925-2312
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name          Order  Citations  PageRank
Feng Liu      1      1803       9.13
Ruiming Tang  2      39         7.21
Huifeng Guo   3      1341       5.44
Xutao Li      4      3663       6.06
Yunming Ye    5      1371       5.58
Xiuqiang He   6      3123       9.21