Title
Reinforcement-Learning-Based Optimization for Content Delivery Policy in Cache-Enabled HetNets
Abstract
Caching popular content at radio access networks is a promising approach to improving content delivery efficiency. Most existing content delivery schemes focus on the perspective of content providers and pay less attention to the service demands of content requesters. In this paper, we investigate the content delivery policy of a mobile device with a service delay constraint in a cache-enabled heterogeneous network (HetNet), where a macro base station (MBS) is overlaid with several cache-equipped small base stations (SBSs). In the considered network, the mobile device makes content delivery decisions based on the time, the cache state, and the signal-to-interference-plus-noise ratio (SINR) state. The problem of finding an optimal content delivery policy is formulated as a Markov decision process (MDP) whose objective is to minimize the delivery cost of the mobile device under a content service deadline constraint. To solve this problem, we propose a reinforcement learning (RL) algorithm that learns the optimal policy. Simulation results demonstrate that the proposed RL-based policy significantly reduces the content delivery cost compared with benchmark solutions.
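The abstract formulates delivery decisions as an MDP over (time, cache state, SINR state) solved with RL, but does not give the paper's actual algorithm, state space, or cost values. The snippet below is therefore only a minimal tabular Q-learning sketch over a toy version of that state space; the action set, deadline, costs, and transition dynamics are hypothetical placeholders for illustration, not the authors' method.

```python
import random
from collections import defaultdict

# Toy MDP loosely mirroring the abstract: state = (time slot, SBS cache hit?, good SINR?).
# All numbers below (deadline, costs, transition probabilities) are made up.
DEADLINE = 5                                   # assumed service deadline in slots
ACTIONS = ["wait", "fetch_sbs", "fetch_mbs"]   # assumed action set
GAMMA, ALPHA, EPSILON = 0.95, 0.1, 0.1

def step(state, action):
    """Toy environment: returns (next_state, cost, done)."""
    t, cached, sinr_good = state
    if action == "fetch_sbs" and cached:
        return None, 1.0, True                 # cheap delivery from the SBS cache
    if action == "fetch_mbs":
        return None, 3.0 if sinr_good else 5.0, True   # costlier MBS delivery
    # "wait" (or a cache miss on fetch_sbs) advances time in this toy model.
    if t + 1 >= DEADLINE:
        return None, 10.0, True                # deadline missed: heavy penalty
    nxt = (t + 1, random.random() < 0.5, random.random() < 0.6)
    return nxt, 0.2, False                     # small per-slot waiting cost

Q = defaultdict(float)                         # Q[(state, action)] = expected cost

def choose(state):
    """Epsilon-greedy action selection for cost minimization."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return min(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(20000):
    state = (0, random.random() < 0.5, random.random() < 0.6)
    done = False
    while not done:
        action = choose(state)
        nxt, cost, done = step(state, action)
        target = cost if done else cost + GAMMA * min(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

# Inspect the learned greedy action in two example states.
for s in [(0, True, True), (0, False, False)]:
    print(s, "->", min(ACTIONS, key=lambda a: Q[(s, a)]))
```

Under these assumptions the learned policy typically fetches from the SBS when the content is cached and otherwise weighs waiting for a better state against fetching from the MBS before the deadline, which is the trade-off the abstract describes.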
Year
2019
DOI
10.1109/GLOBECOM38437.2019.9013975
Venue
IEEE Global Communications Conference
Keywords
Heterogeneous networks, content delivery, Markov decision process, reinforcement learning
DocType
Conference
ISSN
2334-0983
Citations
0
PageRank
0.34
References
0
Authors
4
Name             Order  Citations  PageRank
Zhaojun Nan      1      0          1.01
Yunjian Jia      2      67         13.92
Zhengchuan Chen  3      27         4.55
Liang Liang      4      47         7.43