Title
Two-Phase Multi-armed Bandit for Online Recommendation
Abstract
Personalized online recommendation systems strive to adapt their services to individual users by making use of both item and user information. Despite recent progress, the issue of balancing exploitation and exploration (EE) [1] remains challenging. In this paper, we model personalized online recommendation for e-commerce as a two-phase multi-armed bandit problem. This is the first time that “big arm” and...
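The truncated abstract does not spell out the algorithm, so the following is only a minimal sketch of what a two-phase multi-armed bandit for recommendation could look like, assuming the “big arms” are item clusters and the “small arms” are individual items, with a standard UCB1 policy at each level. The class names, the UCB1 choice, and the simulated click-through rates are illustrative assumptions, not the authors' method.

```python
import math
import random


class UCB1:
    """Standard UCB1 policy over a fixed set of arms."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms

    def select(self):
        # Play each arm once before applying the UCB rule.
        for arm, count in enumerate(self.counts):
            if count == 0:
                return arm
        total = sum(self.counts)
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + math.sqrt(2 * math.log(total) / self.counts[a]),
        )

    def update(self, arm, reward):
        # Incremental update of the empirical mean reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


class TwoPhaseBandit:
    """Phase 1 picks an item cluster ('big arm'); phase 2 picks an item inside it."""

    def __init__(self, clusters):
        self.clusters = clusters                               # list of item lists
        self.cluster_bandit = UCB1(len(clusters))              # big arms
        self.item_bandits = [UCB1(len(c)) for c in clusters]   # small arms per cluster

    def recommend(self):
        c = self.cluster_bandit.select()
        i = self.item_bandits[c].select()
        return c, i, self.clusters[c][i]

    def feedback(self, c, i, reward):
        # The same click/no-click signal updates both levels.
        self.cluster_bandit.update(c, reward)
        self.item_bandits[c].update(i, reward)


if __name__ == "__main__":
    random.seed(0)
    # Hypothetical catalog: three clusters of items with hidden click probabilities.
    clusters = [["i0", "i1"], ["i2", "i3", "i4"], ["i5"]]
    true_ctr = {"i0": 0.05, "i1": 0.10, "i2": 0.30, "i3": 0.20, "i4": 0.15, "i5": 0.08}

    bandit = TwoPhaseBandit(clusters)
    clicks = 0
    for _ in range(5000):
        c, i, item = bandit.recommend()
        reward = 1.0 if random.random() < true_ctr[item] else 0.0
        clicks += reward
        bandit.feedback(c, i, reward)
    print(f"total clicks: {int(clicks)} / 5000")
```

In this sketch the feedback from a single recommendation drives exploration at both granularities: the cluster-level bandit learns which category of items pays off, while the per-cluster bandits refine the choice of individual items within it.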
Year
2021
DOI
10.1109/DSAA53316.2021.9564225
Venue
2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)
Keywords
Measurement, Adaptation models, Scalability, Conferences, Computational modeling, Clustering algorithms, Data science
DocType
Conference
ISBN
978-1-6654-2099-0
Citations
0
PageRank
0.34
References
0
Authors
4
Name | Order | Citations | PageRank
Cairong Yan | 1 | 19 | 9.25
Haixia Han | 2 | 0 | 0.34
Zijian Wang | 3 | 2 | 1.72
Yanting Zhang | 4 | 3 | 2.80