Abstract
---
Experience-driven networking has emerged as a new and highly effective approach for resource allocation in complex communication networks. Deep Reinforcement Learning (DRL) has been shown to be a useful technique for enabling experience-driven networking. In this paper, we focus on a practical and fundamental problem for experience-driven networking: when network configurations change, how to train a new DRL agent to adapt effectively and quickly to the new environment. We present an Actor-Critic-based Transfer learning framework for the Traffic Engineering (TE) problem using policy distillation, which we call ACT-TE. ACT-TE effectively and quickly trains a new DRL agent to solve the TE problem in a new network environment, using both old knowledge (i.e., distilled from the existing agent) and new experience (i.e., newly collected samples). We implement ACT-TE in ns-3 and compare it with commonly used baselines using packet-level simulations on three representative network topologies: NSFNET, ARPANET, and a random topology. The extensive simulation results show that 1) existing well-trained DRL agents do not work well in new network environments; and 2) ACT-TE significantly outperforms two straightforward methods (training from scratch and fine-tuning an existing DRL agent) as well as several widely used traditional methods in terms of network utility, throughput, and delay.
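The abstract only sketches how ACT-TE combines distilled old knowledge with newly collected experience; the paper's actual loss formulation is not reproduced here. As a rough illustration, the PyTorch snippet below shows one generic way to pair a policy-gradient (actor) loss on new samples with a KL-divergence distillation term toward the existing agent's policy. `PolicyNet`, `distillation_transfer_loss`, `distill_weight`, and `temperature` are illustrative names and hyperparameters assumed for this sketch, not part of ACT-TE.

```python
# Hypothetical sketch, not ACT-TE's actual design: an actor loss on new
# samples combined with a policy-distillation term from an old (teacher) agent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Toy actor network mapping a network state to logits over TE actions."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state):
        return self.net(state)  # unnormalized action logits

def distillation_transfer_loss(student, teacher, states, actions, advantages,
                               distill_weight=0.5, temperature=1.0):
    """Actor loss on newly collected samples plus a KL term that pulls the
    student policy toward the old (teacher) agent's policy."""
    student_logits = student(states)
    log_probs = F.log_softmax(student_logits, dim=-1)

    # Policy-gradient (actor) term on new experience: -log pi(a|s) * advantage.
    actor_loss = -(log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
                   * advantages).mean()

    # Distillation term: KL(teacher || student) on temperature-softened targets.
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(states) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    distill_loss = F.kl_div(student_log_probs, teacher_probs,
                            reduction='batchmean')

    return actor_loss + distill_weight * distill_loss

# Minimal usage on random data (8 states of dim 10, 4 candidate TE actions).
teacher = PolicyNet(10, 4)   # stands in for the well-trained old agent
student = PolicyNet(10, 4)   # new agent for the changed network
states = torch.randn(8, 10)
actions = torch.randint(0, 4, (8,))
advantages = torch.randn(8)  # would come from the critic in practice
loss = distillation_transfer_loss(student, teacher, states, actions, advantages)
loss.backward()
```

In a full actor-critic setup the advantages would come from a trained critic and a value-loss term would be optimized alongside; the assumed `distill_weight` knob simply trades off reliance on old knowledge against new experience.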
Year | DOI | Venue
---|---|---
2021 | 10.1109/TNET.2020.3037231 | IEEE/ACM Transactions on Networking

Keywords | DocType | Volume
---|---|---
Experience-driven networking, deep reinforcement learning, transfer learning | Journal | 29

Issue | ISSN | Citations
---|---|---
1 | 1063-6692 | 1

PageRank | References | Authors
---|---|---
0.37 | 0 | 7

Name | Order | Citations | PageRank |
---|---|---|---
Zhiyuan Xu | 1 | 73 | 6.42 |
Dejun Yang | 2 | 1685 | 93.08 |
Jian Tang | 3 | 1095 | 74.34 |
Yinan Tang | 4 | 4 | 3.78 |
Tongtong Yuan | 5 | 5 | 4.12 |
Yanzhi Wang | 6 | 1082 | 136.11 |
Guoliang Xue | 7 | 48 | 9.12 |