Title
Traffic Scenario Clustering and Load Balancing with Distilled Reinforcement Learning Policies
Abstract
Due to the rapid increase in wireless communication traffic in recent years, load balancing has become increasingly important for ensuring quality of service. However, variations in traffic patterns near different serving base stations make this task challenging. On one hand, crafting a single control policy that performs well across all base station sectors is difficult. On the other hand, maintaining a separate controller for every sector introduces overhead and leads to redundancy when sectors experience similar traffic patterns. In this paper, we propose constructing a concise set of controllers that covers a wide range of traffic scenarios, allowing the operator to select a suitable controller for each sector based on local traffic conditions. To construct these controllers, we present a method that clusters similar scenarios and learns a general control policy for each cluster. We use deep reinforcement learning (RL) to first train separate control policies on diverse traffic scenarios, and then incrementally merge similar RL policies via knowledge distillation. Experimental results show that our concise policy set reduces redundancy with only minor performance degradation compared to policies trained separately on each traffic scenario. Our method also outperforms handcrafted control parameters, joint learning on all tasks, and two popular clustering methods.
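The core operation the abstract describes, merging RL policies via knowledge distillation, amounts to training a student policy to reproduce a teacher policy's action distribution. Below is a minimal, hedged sketch of that idea; it is not the paper's training code. All names, sizes, the toy teacher, and the use of plain gradient descent on the KL divergence are illustrative assumptions.

```python
# Sketch of policy distillation: fit a student's logits so that
# softmax(student) matches a teacher's per-state action distribution,
# by minimizing KL(teacher || student) with gradient descent.
# (Toy setup: tabular states, discrete actions; not the paper's method.)
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # KL(p || q) per state; small epsilon guards against log(0).
    return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)

def distill(teacher_probs, student_logits, lr=0.5, steps=500):
    """Distill a teacher's action distribution into student logits.

    The gradient of KL(p || softmax(s)) with respect to s is
    softmax(s) - p, so each step nudges the student toward the teacher.
    """
    s = student_logits.copy()
    for _ in range(steps):
        s -= lr * (softmax(s) - teacher_probs)
    return s

rng = np.random.default_rng(0)
teacher = softmax(rng.normal(size=(4, 3)))   # 4 states, 3 actions (toy)
student = distill(teacher, np.zeros((4, 3)))
print(kl(teacher, softmax(student)).max())   # near zero after distillation
```

In the paper's setting, the teacher side would be several sector-specific RL policies and the student a single merged policy per cluster; the sketch only shows the matching objective that makes such a merge possible.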
Year
2022
DOI
10.1109/ICC45855.2022.9838370
Venue
ICC 2022 - IEEE International Conference on Communications
Keywords
load balancing,reinforcement learning,knowledge distillation
DocType
Conference
ISSN
1550-3607
ISBN
978-1-5386-8348-4
Citations
0
PageRank
0.34
References
7
Authors
7
Name           Order  Citations  PageRank
Jimmy Li       1      0          0.34
Di Wu          2      636        117.73
Yi Tian Xu     3      0          3.04
Tianyu Li      4      0          0.34
Seowoo Jang    5      0          1.69
Xue Liu        6      88         23.33
Gregory Dudek  7      2163       255.48