Title
Locality-Sensitive State-Guided Experience Replay Optimization for Sparse Rewards in Online Recommendation
Abstract
Online recommendation requires handling rapidly changing user preferences. Deep reinforcement learning (DRL) is an effective means of capturing users' dynamic interests during their interactions with recommender systems. Generally, it is challenging to train a DRL agent in online recommender systems because of the sparse rewards caused by the large action space (e.g., the candidate item space) and comparatively few user interactions. Experience replay (ER) has been extensively studied as a way to overcome sparse rewards. However, existing ER methods adapt poorly to the complex environment of online recommender systems and are inefficient in learning an optimal strategy from past experience. As a step toward filling this gap, we propose a novel state-aware experience replay model, in which the agent selectively discovers the most relevant and salient experiences and is guided toward the optimal policy for online recommendations. In particular, a locality-sensitive hashing method is proposed to selectively retain the most meaningful experiences at scale, and a prioritized reward-driven strategy is designed to replay more valuable experiences with a higher probability. We formally show that the proposed method guarantees upper and lower bounds on experience replay and optimizes the space complexity, and empirically demonstrate our model's superiority over several existing experience replay methods on three benchmark simulation platforms.
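The abstract outlines two components: a locality-sensitive hashing scheme for deciding which experiences to retain, and a reward-driven prioritized sampling rule for replay. The sketch below is a minimal illustration of how such a buffer could be organized, assuming a SimHash-style random-projection hash over state vectors and sampling probabilities proportional to shifted rewards; the class name, parameters, and per-bucket capacity rule are hypothetical and are not taken from the paper.

# Hypothetical sketch of an LSH-bucketed, reward-prioritized replay buffer.
# All names and design choices here are illustrative assumptions, not the
# authors' implementation.
import numpy as np

class LSHReplayBuffer:
    def __init__(self, state_dim, num_planes=8, capacity_per_bucket=4, seed=0):
        rng = np.random.default_rng(seed)
        # Random hyperplanes for SimHash-style locality-sensitive hashing.
        self.planes = rng.normal(size=(num_planes, state_dim))
        self.capacity_per_bucket = capacity_per_bucket
        # hash code -> list of (state, action, reward, next_state, done)
        self.buckets = {}

    def _hash(self, state):
        # Sign pattern of projections onto random hyperplanes; nearby states
        # tend to receive the same code.
        bits = (self.planes @ np.asarray(state) > 0).astype(int)
        return tuple(bits)

    def add(self, state, action, reward, next_state, done):
        code = self._hash(state)
        bucket = self.buckets.setdefault(code, [])
        bucket.append((state, action, reward, next_state, done))
        if len(bucket) > self.capacity_per_bucket:
            # Keep only the highest-reward transitions within a bucket, so
            # redundant, locally similar experiences are discarded.
            bucket.sort(key=lambda t: t[2], reverse=True)
            del bucket[self.capacity_per_bucket:]

    def sample(self, batch_size):
        transitions = [t for bucket in self.buckets.values() for t in bucket]
        rewards = np.array([t[2] for t in transitions], dtype=float)
        # Reward-driven priorities, shifted to stay positive, define the
        # replay probabilities.
        priorities = rewards - rewards.min() + 1e-6
        probs = priorities / priorities.sum()
        idx = np.random.choice(len(transitions), size=batch_size, p=probs)
        return [transitions[i] for i in idx]

Under these assumptions, the buffer size is bounded by the number of hash buckets times the per-bucket capacity, which is one simple way a hashing-based retention rule can cap space usage in the spirit of the abstract's space-complexity claim.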
Year
2022
DOI
10.1145/3477495.3532015
Venue
SIGIR '22: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval
Keywords
Recommender Systems, Deep Reinforcement Learning, Experience Replay
DocType
Conference
Citations
0
PageRank
0.34
References
11
Authors
6
Name                 Order  Citations  PageRank
Xiaocong Chen        1      7          2.49
Lina Yao             2      981        93.63
Julian John McAuley  3      2856       115.30
Weili Guan           4      43         10.84
Xiaojun Chang        5      1585       76.85
Xianzhi Wang         6      276        40.32