Title
Contextual Bandits for Multi-objective Recommender Systems
Abstract
The contextual bandit framework has become a popular solution for online interactive recommender systems. Traditionally, the literature on interactive recommender systems has focused on recommendation accuracy. However, it is increasingly recognized that accuracy alone is not a sufficient quality criterion. Thus, other concepts, such as diversity and novelty, have been proposed to improve recommendation evaluation. Simultaneously considering multiple criteria in the payoff function leads to multi-objective recommendation. In this paper, we model the payoff function of contextual bandits to consider accuracy, diversity, and novelty simultaneously. We evaluated our proposed algorithm on the Yahoo! Front Page Module dataset, which contains over 33 million events. Results showed that: (a) recommendation quality improves when all objectives are considered equally, and (b) the trade-off between accuracy, diversity, and novelty can be tuned, so that the recommendation emphasis can be adjusted to the needs of different users.
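The multi-objective payoff described in the abstract can be illustrated with a minimal sketch: a weighted linear scalarization of per-arm accuracy, diversity, and novelty scores, combined with epsilon-greedy arm selection. This is a hypothetical illustration of the general idea, not the paper's actual algorithm (which builds on contextual bandits over the Yahoo! Front Page Module data); the function names, score tuples, and weights here are assumptions.

```python
import random

def scalarized_payoff(scores, weights):
    """Combine per-objective scores (accuracy, diversity, novelty)
    into a single payoff via a weighted sum (linear scalarization)."""
    return sum(w * s for w, s in zip(weights, scores))

def select_arm(arm_scores, weights, epsilon=0.1, rng=random):
    """Epsilon-greedy choice over arms, where each arm is described by
    an (accuracy, diversity, novelty) score tuple. Adjusting `weights`
    shifts the recommendation emphasis between the three objectives."""
    if rng.random() < epsilon:
        return rng.randrange(len(arm_scores))  # explore a random arm
    payoffs = [scalarized_payoff(s, weights) for s in arm_scores]
    return max(range(len(payoffs)), key=payoffs.__getitem__)  # exploit

# Example: with equal weights the diverse/novel arm wins;
# with accuracy-only weights the accurate arm wins.
arms = [(0.9, 0.1, 0.1), (0.5, 0.8, 0.7)]
```

Setting the weights to (1, 0, 0) recovers a purely accuracy-driven recommender, while equal weights of 1/3 each correspond to the "equally considering all objectives" setting mentioned in the abstract.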
Year: 2015
DOI: 10.1109/BRACIS.2015.67
Venue: BRACIS
Keywords: Online Recommender Systems, Multi-armed Bandits, Multi-objective
Field: Recommender system, Multiple criteria, Computer science, Artificial intelligence, Compromise, Novelty, Machine learning, Stochastic game
DocType: Conference
Citations: 2
PageRank: 0.40
References: 28
Authors: 1
Name: Anisio Lacerda, Order: 1, Citations: 26, PageRank: 1.92