Title
Context-aware reinforcement learning for course recommendation
Abstract
Online course recommendation is essential to the efficiency of e-learning. Existing recommendation methods cannot guarantee the effectiveness and accuracy of course recommendation, especially when a user has enrolled in many different courses, because they fail to distinguish the most relevant historical courses, i.e., those that help predict the target course that truly reflects the user's interests from her sequential learning behaviors. In this paper, we propose a context-aware reinforcement learning method, named Hierarchical and Recurrent Reinforcement Learning (HRRL), to efficiently reconstruct user profiles for course recommendation. The key ingredient of our scheme is the novel interaction between an attention-based recommendation model and a profile reviser based on Recurrent Reinforcement Learning (RRL) that exploits temporal context. To this end, a contextual policy gradient with approximation is proposed for RRL. By employing RRL in hierarchical profile-revision tasks, the proposed HRRL model achieves reliable convergence of the revising policy and improves recommendation accuracy. We demonstrate the effectiveness of our method through experiments on two open online course datasets. Empirical results show that HRRL significantly outperforms state-of-the-art baselines.
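The abstract describes the approach only at a high level: an attention-based recommender interacts with a profile reviser trained by policy gradient. As a rough illustration of the profile-revision idea, the sketch below trains a single-level, per-course keep/drop policy with a plain REINFORCE update on toy data. It flattens the hierarchical and recurrent structure of HRRL, and every name, feature, and reward in it is a hypothetical stand-in rather than the authors' contextual policy gradient with approximation.

```python
# Illustrative sketch only (assumptions, not the paper's method): a REINFORCE-style
# profile reviser that decides, for each historical course in a user's profile,
# whether to keep or drop it.
import numpy as np

rng = np.random.default_rng(0)

def policy_probs(course_feats, theta):
    # Keep-probability for each historical course under a logistic policy.
    return 1.0 / (1.0 + np.exp(-(course_feats @ theta)))

def sample_actions(probs):
    # Sample a binary keep/drop decision for every course in the profile.
    return (rng.random(probs.shape) < probs).astype(float)

def reward(actions, relevance):
    # Toy reward: mean relevance of the kept courses minus a size penalty.
    # In the paper, the signal would instead come from the recommendation model.
    kept = actions.astype(bool)
    if not kept.any():
        return -1.0
    return relevance[kept].mean() - 0.01 * kept.sum()

# Hypothetical data: 20 historical courses with 8-dimensional features and a
# stand-in relevance score per course (assumed for illustration).
course_feats = rng.normal(size=(20, 8))
relevance = rng.random(20)

theta = np.zeros(8)       # policy parameters
lr, baseline = 0.05, 0.0  # learning rate and moving-average reward baseline

for step in range(500):
    probs = policy_probs(course_feats, theta)
    actions = sample_actions(probs)
    r = reward(actions, relevance)
    baseline = 0.9 * baseline + 0.1 * r
    # REINFORCE update: for a per-course Bernoulli policy with p = sigmoid(x . theta),
    # grad log pi(a|s) = sum_i (a_i - p_i) * x_i.
    grad_log_pi = course_feats.T @ (actions - probs)
    theta += lr * (r - baseline) * grad_log_pi
```

In the full method summarized by the abstract, the reward would reflect how much the revised profile improves the attention-based recommender's prediction of the target course, and the revision would be organized into hierarchical tasks exploiting temporal context, both of which this sketch omits.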
Year
2022
DOI
10.1016/j.asoc.2022.109189
Venue
Applied Soft Computing
Keywords
Recommender systems, Reinforcement learning, Policy gradient, Attention mechanism
DocType
Journal
Volume
125
ISSN
1568-4946
Citations
0
PageRank
0.34
References
0
Authors
6
Name            Order   Citations   PageRank
Yuanguo Lin     1       0           0.68
Fan Lin         2       67          15.98
Lvqing Yang     3       13          4.34
Wenhua Zeng     4       136         14.83
Yong Liu        5       290         19.62
Pengcheng Wu    6       654         30.60