Title
State representation modeling for deep reinforcement learning based recommendation
Abstract
Reinforcement learning techniques have recently been introduced into interactive recommender systems to capture the dynamic patterns of user behavior during interaction with the recommender system and to perform planning that optimizes long-term performance. Most existing work focuses on designing the policy and learning algorithms of the recommender agent, but pays little attention to the state representation of the environment, which is essential for recommendation decision making. In this paper, we first formulate the interactive recommendation problem under a deep reinforcement learning framework. Within this framework, we then carefully design four state representation schemes for learning the recommendation policy. Inspired by recent advances in feature interaction modeling for user response prediction, we find that explicitly modeling user–item interactions in the state representation substantially helps the recommendation policy perform effective reinforcement learning. Extensive experiments on four real-world datasets are conducted under both offline and simulated online evaluation settings. The results demonstrate that the proposed state representation schemes lead to better performance than state-of-the-art methods.
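The core idea highlighted in the abstract, explicitly modeling user–item interactions inside the state representation of a deep-RL recommender, can be illustrated with a minimal sketch. The module names, embedding sizes, and the element-wise-product interaction term below are illustrative assumptions in the spirit of feature-interaction models for user response prediction; they are not the paper's actual four state representation schemes.

```python
# Minimal sketch (assumed names and architecture, not the paper's exact design):
# a state encoder that adds an explicit user-item interaction term before a Q-network.
import torch
import torch.nn as nn

class InteractionStateEncoder(nn.Module):
    """Encodes (user history, candidate item) into a state-action vector.

    Interaction term: element-wise products between the candidate item embedding
    and each historical item embedding, mean-pooled over the history.
    """
    def __init__(self, n_items: int, emb_dim: int = 32):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, emb_dim)

    def forward(self, history: torch.Tensor, candidate: torch.Tensor) -> torch.Tensor:
        # history: (batch, seq_len) item ids; candidate: (batch,) item ids
        h = self.item_emb(history)                 # (batch, seq_len, emb_dim)
        c = self.item_emb(candidate)               # (batch, emb_dim)
        pooled = h.mean(dim=1)                     # pooled history representation
        inter = (h * c.unsqueeze(1)).mean(dim=1)   # explicit user-item interaction features
        return torch.cat([pooled, c, inter], dim=-1)

class QNetwork(nn.Module):
    """Scores a candidate item given the interaction-aware state representation;
    the score could serve as a Q-value in a DQN-style recommender agent."""
    def __init__(self, n_items: int, emb_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.encoder = InteractionStateEncoder(n_items, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 * emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, history: torch.Tensor, candidate: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.encoder(history, candidate)).squeeze(-1)

# Toy usage: score candidate items for 2 users with 5-item interaction histories.
q = QNetwork(n_items=1000)
hist = torch.randint(0, 1000, (2, 5))
cand = torch.randint(0, 1000, (2,))
print(q(hist, cand).shape)  # torch.Size([2])
```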
Year: 2020
DOI: 10.1016/j.knosys.2020.106170
Venue: Knowledge-Based Systems
Keywords: State representation modeling, Deep reinforcement learning, Recommendation
DocType: Journal
Volume: 205
ISSN: 0950-7051
Citations: 0
PageRank: 0.34
References: 0
Authors: 9
1. Feng Liu
2. Ruiming Tang
3. Xutao Li
4. Weinan Zhang
5. Yunming Ye
6. Haokun Chen
7. Huifeng Guo
8. Yuzhou Zhang
9. Xiuqiang He