Title |
---|
Content Caching Policy for 5G Network Based on Asynchronous Advantage Actor-Critic Method |
Abstract |
---|
Content caching at base stations (BSs) has attracted increasing attention in 5G networks for its ability to save resources and reduce data traffic. In practice, however, designing an intelligent caching policy is challenging due to limited storage capacity and users' requests that vary over time and space. In this paper, we propose an algorithm based on asynchronous advantage actor-critic (A3C) to solve the content caching problem. We consider a set of cooperative BSs, each equipped with a cache; every BS can fetch contents from either neighboring BSs or the backbone network, at different costs. To learn the optimal caching and sharing policy, an online A3C-based algorithm is designed to minimize the total transmission cost without knowledge of the content popularity distribution. To evaluate the proposed algorithm, we compare its performance with classical caching policies, including Least Recently Used (LRU), Least Frequently Used (LFU), Adaptive Replacement Cache (ARC), and one distributed algorithm from the literature. Simulation results show that the proposed A3C-based algorithm achieves a low transmission cost and improves the convergence rate in a dynamic environment. |
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/GLOBECOM38437.2019.9014268 | IEEE Global Communications Conference |
Keywords | DocType | ISSN |
Deep reinforcement learning, asynchronous advantage actor-critic, content caching, transmission cost | Conference | 2334-0983 |
Citations | PageRank | References |
0 | 0.34 | 0 |
Authors |
---|
6 |