Title
Reinforcement Learning to Optimize Long-term User Engagement in Recommender Systems
Abstract
Recommender systems play a crucial role in our daily lives. The feed streaming mechanism has been widely used in recommender systems, especially in mobile apps. The feed streaming setting provides users an interactive manner of recommendation in never-ending feeds. In such an interactive manner, a good recommender system should pay more attention to user stickiness, which goes far beyond classical instant metrics and is typically measured by long-term user engagement. Directly optimizing long-term user engagement is a non-trivial problem, as the learning target is usually not available to conventional supervised learning methods. Though reinforcement learning (RL) naturally fits the problem of maximizing long-term rewards, applying RL to optimize long-term user engagement still faces challenges: user behaviors are versatile and difficult to model, typically consisting of both instant feedback (e.g., clicks, ordering) and delayed feedback (e.g., dwell time, revisit); in addition, performing effective off-policy learning is still immature, especially when combining bootstrapping and function approximation. To address these issues, in this work we introduce a reinforcement learning framework, FeedRec, to optimize long-term user engagement. FeedRec includes two components: 1) a Q-Network, designed as a hierarchical LSTM, which takes charge of modeling complex user behaviors, and 2) an S-Network, which simulates the environment, assists the Q-Network, and avoids the instability of convergence in policy learning. Extensive experiments on synthetic data and a real-world large-scale dataset show that FeedRec effectively optimizes long-term user engagement and outperforms state-of-the-art methods.
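The instant-versus-delayed-feedback trade-off described in the abstract can be illustrated with a minimal tabular Q-learning sketch. This is illustrative only: FeedRec itself uses a hierarchical-LSTM Q-Network and a simulated-environment S-Network, neither of which is implemented here, and the toy environment, states, actions, and reward values below are all hypothetical. A "clickbait" action earns a high instant reward but ends the session, while a "relevant" action earns less per step but keeps the user engaged, yielding a delayed dwell-time bonus.

```python
import random

GAMMA, ALPHA = 0.9, 0.1
N_STATES, N_ACTIONS = 3, 2  # toy session positions; actions: 0=clickbait, 1=relevant

def step(state, action):
    """Toy feed environment (hypothetical): returns (next_state, reward, done)."""
    if action == 0:                    # clickbait: big instant reward, user leaves
        return state, 1.0, True
    if state == N_STATES - 1:          # stayed relevant to the end: delayed dwell bonus
        return state, 0.5 + 5.0, True
    return state + 1, 0.5, False       # relevant: small instant reward, user stays

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.randrange(N_ACTIONS)            # uniform behavior policy (off-policy)
            s2, r, done = step(s, a)
            target = r if done else r + GAMMA * max(q[s2])
            q[s][a] += ALPHA * (target - q[s][a])   # one-step Q-learning update
            s = s2
    return q

q = train()
# The greedy policy w.r.t. q prefers the engagement-preserving action despite
# its lower instant reward, because the delayed reward dominates the return.
assert q[0][1] > q[0][0]
```

Because Q-learning is off-policy, the uniform-random behavior policy still recovers value estimates for the optimal policy; maximizing only instant reward would pick the clickbait action and forfeit the larger long-term return.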
Year
2019
DOI
10.1145/3292500.3330668
Venue
KDD
Keywords
long-term user engagement, recommender system, reinforcement learning
Field
Convergence (routing), Recommender system, Function approximation, Information retrieval, Computer science, Bootstrapping, Supervised learning, Human–computer interaction, Synthetic data, Reinforcement, Reinforcement learning
DocType
Journal
Volume
abs/1902.05570
ISBN
978-1-4503-6201-6
Citations
18
PageRank
0.70
References
23
Authors
6
Name | Order | Citations | PageRank
Lixin Zou | 1 | 39 | 4.81
Long Xia | 2 | 211 | 8.86
Zhuoye Ding | 3 | 150 | 11.23
Jiaxing Song | 4 | 50 | 9.62
Weidong Liu | 5 | 93 | 17.66
Dawei Yin | 6 | 866 | 61.99