Title
UAV-Assisted Wireless Energy and Data Transfer With Deep Reinforcement Learning
Abstract
As a typical scenario in future-generation communication networks, UAV-assisted communication can perform autonomous data delivery for massive machine-type communication (mMTC), where data generated by Internet of Things (IoT) devices are carried and delivered to destinations that have no direct communication channel to those devices. Wireless energy transfer techniques can recharge the UAV while the system is in operation, enabling the UAV to continuously collect and deliver data. In this work, we formulate a Markov decision process (MDP) model to describe the energy and data transfer optimization problem for the UAV. To maximize the long-term utility of the UAV, the MDP model is solved by a value iteration algorithm to obtain the optimal strategies for the UAV to collect data, deliver data, and receive transferred energy to replenish its on-board battery. Furthermore, to tackle the issues of system state uncertainty, partially observable states, and large state spaces in UAV-assisted communication systems, we extend the MDP model and solve it using Q-learning and deep reinforcement learning (DRL) schemes. Simulations and numerical results validate that, compared with baseline schemes, the proposed MDP model with the DRL-based scheme achieves better wireless energy and data transfer strategies, yielding a higher long-term utility for the UAV.
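To illustrate the tabular reinforcement-learning baseline the abstract mentions, the sketch below applies Q-learning to a toy MDP loosely inspired by the setting: a UAV with a discretized battery level chooses among collecting data, delivering data, or receiving wireless energy. The state space, action set, rewards, and dynamics here are invented for illustration and are not the paper's actual model.

```python
import random

# Toy illustration (assumed, not the paper's model): states are battery
# levels 0..4; collect/deliver each cost one unit of energy, charging
# (wireless energy transfer) restores one unit. Tabular Q-learning.
ACTIONS = ["collect", "deliver", "charge"]
N_STATES = 5                      # discretized battery levels (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def step(state, action):
    """Assumed toy dynamics: return (next_state, reward)."""
    if action == "charge":
        return min(state + 1, N_STATES - 1), 0.0
    if state == 0:                # battery empty: task fails
        return 0, -1.0
    reward = 1.0 if action == "deliver" else 0.5
    return state - 1, reward

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

random.seed(0)
state = N_STATES - 1
for t in range(60000):
    if t % 30 == 0:               # periodic random restarts for state coverage
        state = random.randrange(N_STATES)
    action = choose(state)
    nxt, reward = step(state, action)
    # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

# Greedy policy per battery level: charge when empty, act otherwise.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy)
```

The DRL extension described in the abstract would replace the lookup table `Q` with a neural-network approximator, which is what makes large or partially observable state spaces tractable.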
Year
2021
DOI
10.1109/TCCN.2020.3027696
Venue
IEEE Transactions on Cognitive Communications and Networking
Keywords
Unmanned aerial vehicle, wireless energy transfer, Internet of Things, Markov decision process, deep reinforcement learning
DocType
Journal
Volume
7
Issue
1
ISSN
2332-7731
Citations
4
PageRank
0.38
References
0
Authors
7
Name           Order  Citations  PageRank
Zehui Xiong    1      586        54.94
Yang Zhang     2      313        19.51
Wei Yang Lim   3      193        13.37
Jiawen Kang    4      543        31.46
Niyato Dusit   5      9486       547.06
Cyril Leung    6      64         8.90
Chunyan Miao   7      2307       195.72