Title
Deep Reinforcement Learning for Edge Computing and Resource Allocation in 5G Beyond.
Abstract
By extending computation capacity to the edge of wireless networks, edge computing has the potential to enable computation-intensive and delay-sensitive applications in 5G and beyond via computation offloading. However, in multi-user heterogeneous networks, it is challenging to capture complete network information, such as wireless channel state, available bandwidth, and available computation resources. The strong coupling among devices in terms of application requirements and radio access modes makes it even more difficult to design an optimal computation offloading scheme. Deep Reinforcement Learning (DRL) is an emerging technique for addressing such problems with limited and less accurate network information. In this paper, we utilize DRL to design an optimal computation offloading and resource allocation strategy that minimizes system energy consumption. We first present a multi-user edge computing framework for heterogeneous networks. Then, we formulate the joint computation offloading and resource allocation problem as a DRL problem and propose a new DRL-inspired algorithm to minimize system energy consumption. Numerical results based on a real-world dataset demonstrate the effectiveness of the proposed algorithm compared with two benchmark solutions.
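The paper's concrete algorithm is not reproduced here, but the following minimal sketch illustrates the general idea described in the abstract: a DQN-style agent observes a simplified network state and decides whether a task is executed locally or offloaded, with the reward defined as negative energy consumption. The state features, energy model, and all numeric constants are illustrative assumptions, not values or equations from the paper, and the single-user binary-offloading setting is a deliberate simplification of the paper's multi-user joint offloading and resource allocation problem.

# Hypothetical sketch: DQN-style agent for binary computation offloading.
# Reward = negative energy of the chosen execution mode. All constants
# (energy coefficients, state distributions) are illustrative placeholders,
# not values taken from the paper.
import random
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 3      # [channel gain, task size, available edge CPU share], all normalized
N_ACTIONS = 2      # 0 = local execution, 1 = offload to edge server

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )
    def forward(self, x):
        return self.net(x)

def random_state():
    # Randomly drawn network snapshot (placeholder distribution).
    return np.random.uniform(0.1, 1.0, size=STATE_DIM)

def energy(state, action):
    g, size, cpu = state
    if action == 0:                      # local: energy grows with task size
        return 0.8 * size
    tx = 0.5 * size / g                  # offload: transmission energy dominates
    return tx + 0.05 * size / cpu        # plus a small edge-side share billed to the user

qnet = QNet()
opt = optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.1
replay = []

state = random_state()
for step in range(5000):
    s = torch.tensor(state, dtype=torch.float32)
    if random.random() < eps:            # epsilon-greedy exploration
        action = random.randrange(N_ACTIONS)
    else:
        action = int(qnet(s).argmax().item())
    reward = -energy(state, action)      # minimizing energy == maximizing its negative
    next_state = random_state()
    replay.append((state, action, reward, next_state))
    state = next_state

    if len(replay) >= 64:                # one gradient step on a sampled mini-batch
        batch = random.sample(replay, 64)
        ss = torch.tensor(np.array([b[0] for b in batch]), dtype=torch.float32)
        aa = torch.tensor([b[1] for b in batch])
        rr = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        ns = torch.tensor(np.array([b[3] for b in batch]), dtype=torch.float32)
        q = qnet(ss).gather(1, aa.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = rr + gamma * qnet(ns).max(1).values
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad()
        loss.backward()
        opt.step()

In the paper's multi-user setting, the action would additionally encode the resource allocation decision and the state would include per-user channel and workload information; the sketch only shows the basic DRL training loop.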
Year
2019
DOI
10.1109/ICCT46805.2019.8947146
Venue
ICCT
Field
Edge computing, Wireless network, Computer science, Computer network, Computation offloading, Resource allocation, Heterogeneous network, Energy consumption, Computation, Reinforcement learning, Distributed computing
DocType
Conference
Citations
0
PageRank
0.34
References
11
Authors
6
Name | Order | Citations | PageRank
Yueyue Dai | 1 | 1 | 1.36
Xu Du | 2 | 37 | 15.92
Ke Zhang | 3 | 540 | 32.46
Yunlong Lu | 4 | 0 | 0.34
Sabita Maharjan | 5 | 1078 | 52.89
Yan Zhang | 6 | 5818 | 354.13