Abstract |
---|
The dramatic proliferation of emerging Internet-of-Things (IoT) devices is making telecommunications networks increasingly congested. Owing to their flexible deployment and spectrum-supplement capabilities, cognitive radio based unmanned aerial vehicles (CUAVs) have been regarded as a promising way to help the network offload this overwhelming traffic. For a CUAV-assisted network, offloading as much traffic as possible is the central goal. This requires jointly considering both data collection and data transmission, which is a very challenging problem due to the heterogeneous and uncertain environment in terms of both traffic demand and spectrum availability. In this paper, aiming to maximize the offloaded traffic, we propose a joint strategy on trajectory design, time division, and spectrum access. Since the environmental information on traffic demand and spectrum availability is unobtainable, we further develop a model-free deep reinforcement learning (DRL) based solution for the (TS)² joint strategy, so that the CUAV can make the best decisions autonomously under the uncertain environment. Simulation results demonstrate the effectiveness of the designed DRL solution and the offloading efficiency of the proposed (TS)² strategy. |
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/GLOBECOM46510.2021.9685406 | 2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM) |
Keywords | DocType | ISSN |
---|---|---|
CUAV-assisted network, traffic offloading, deep reinforcement learning | Conference | 2334-0983 |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 0 |
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Xuanheng Li | 1 | 7 | 1.53 |
Sike Cheng | 2 | 0 | 0.34 |
Nan Zhao | 3 | 1591 | 123.85 |
Nianmin Yao | 4 | 159 | 21.57 |