Title
Load Balancing for Communication Networks via Data-Efficient Deep Reinforcement Learning
Abstract
Within a cellular network, load balancing between different cells is of critical importance to network performance and quality of service. Most existing load balancing algorithms are manually designed and tuned rule-based methods for which near-optimal performance is almost impossible to achieve, and which are difficult to adapt quickly to traffic changes in real-world environments. Given the success of Reinforcement Learning (RL) algorithms in many application domains, there have been a number of efforts to tackle load balancing for communication systems using RL-based methods. To our knowledge, none of these efforts have addressed the need for data efficiency within the RL framework, which is one of the main obstacles to applying RL to wireless network load balancing. In this paper, we formulate the communication load balancing problem as a Markov Decision Process and propose a data-efficient transfer deep reinforcement learning algorithm to address it. Experimental results show that the proposed method significantly improves system performance over other baselines and is more robust to environmental changes.
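The abstract describes casting load balancing as a Markov Decision Process. A minimal illustrative sketch of such a formulation (not the paper's actual method; the state, action, and reward definitions below are assumptions for illustration) treats per-cell loads as the state, a (source, target) load shift as the action, and a balance measure as the reward:

```python
import statistics

# Hypothetical MDP sketch for cell load balancing (assumed definitions):
#   state  s_t : list of per-cell traffic loads
#   action a_t : a (src, dst) cell pair to shift some load between
#   reward r_t : negative std-dev of loads (higher = better balanced)

def reward(loads):
    """Balance reward: negative population standard deviation of loads."""
    return -statistics.pstdev(loads)

def step(loads, action, amount=1.0):
    """Apply a load-shift action (src, dst); return next state and reward."""
    src, dst = action
    shifted = min(amount, loads[src])  # cannot move more load than the cell has
    nxt = list(loads)
    nxt[src] -= shifted
    nxt[dst] += shifted
    return nxt, reward(nxt)

def greedy_policy(loads):
    """A simple rule-based baseline: busiest cell offloads to the idlest."""
    src = max(range(len(loads)), key=loads.__getitem__)
    dst = min(range(len(loads)), key=loads.__getitem__)
    return (src, dst)
```

An RL agent would replace `greedy_policy` with a learned (e.g. deep Q-network) policy trained on `(state, action, reward, next_state)` transitions; the paper's contribution is making that training data-efficient via transfer learning.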
Year: 2021
DOI: 10.1109/GLOBECOM46510.2021.9685294
Venue: 2021 IEEE Global Communications Conference (GLOBECOM)
Keywords: load balancing, reinforcement learning, transfer learning
DocType: Conference
ISSN: 2334-0983
Citations: 0
PageRank: 0.34
References: 0
Authors (12)

Name            | Order | Citations | PageRank
Di Wu           |   1   |     0     |   0.68
Jikun Kang      |   2   |     0     |   1.01
Yi Tian Xu      |   3   |     0     |   3.04
Hang Li         |   4   |     0     |   1.35
Jimmy Li        |   5   |     0     |   0.34
Xi Chen         |   6   |     0     |   1.01
Dmitriy Rivkin  |   7   |     0     |   1.35
Michael Jenkin  |   8   |   3215    |   7.35
Taeseop Lee     |   9   |     0     |   0.68
Intaik Park     |  10   |     0     |   0.68
Xue Liu         |  11   |     6     |   2.75
Gregory Dudek   |  12   |     7     |   6.16