Title
Energy-Efficient UAV Movement Control for Fair Communication Coverage: A Deep Reinforcement Learning Approach
Abstract
Unmanned Aerial Vehicles (UAVs) are considered an important element of wireless communication networks because of their agility, mobility, and ability to be deployed as mobile base stations (BSs) that improve communication quality and coverage. UAVs can provide communication services to ground users in a variety of scenarios, such as transportation systems, disaster and emergency situations, and surveillance. However, covering a specific area in a dynamic environment for a long time with UAVs is challenging because of their limited energy resources, short communication range, and flight regulations. A distributed solution is therefore needed to overcome these limitations and to handle the interactions among UAVs, which lead to a large state space. In this paper, we introduce a novel distributed control solution that places a group of UAVs in a candidate area so as to improve the coverage score with minimum energy consumption and high fairness. The proposed algorithm is called the state-based game with actor-critic (SBG-AC). To simplify the complex interactions in the problem, we model SBG-AC using a state-based potential game. We then merge SBG-AC with an actor-critic algorithm to ensure convergence, to control each UAV in a distributed way, and to provide learning capability in dynamic environments. Simulation results show that SBG-AC outperforms distributed DRL and DRL-EC3 in terms of fairness, coverage score, and energy consumption.
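The abstract does not spell out how the coverage score, fairness, or energy terms are computed. As a rough, non-authoritative illustration of the trade-off it describes, the Python sketch below assumes Jain's fairness index over per-point coverage counts and an energy-normalized coverage reward; the function names (jains_fairness, step_reward) and the exact weighting are hypothetical and not taken from the paper.

```python
import numpy as np

def jains_fairness(coverage_counts):
    """Jain's fairness index (sum x)^2 / (n * sum x^2), in [1/n, 1].

    coverage_counts: how often each ground point has been covered so far
    (an assumed bookkeeping choice, not defined in the abstract).
    """
    x = np.asarray(coverage_counts, dtype=float)
    if np.allclose(x, 0.0):
        return 0.0  # guard: no point covered yet
    return (x.sum() ** 2) / (len(x) * np.square(x).sum())

def step_reward(coverage_counts, energy_used, max_energy):
    """Hypothetical per-step reward: fairness-weighted coverage per unit energy."""
    coverage_score = np.count_nonzero(coverage_counts) / len(coverage_counts)
    fairness = jains_fairness(coverage_counts)
    energy = max(energy_used / max_energy, 1e-6)  # normalized energy spent so far
    return fairness * coverage_score / energy

# Toy usage: 5 ground points covered (2, 1, 0, 3, 1) times, 120 J used of a 500 J budget.
print(step_reward([2, 1, 0, 3, 1], energy_used=120.0, max_energy=500.0))
```

A reward shaped this way rises with broader, more even coverage and falls as the UAVs consume energy, which is the behaviour the actor-critic learner in SBG-AC is described as optimizing; the reward actually used in the paper may differ.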
Year
2022
DOI
10.3390/s22051919
Venue
SENSORS
Keywords
UAV, fairness, coverage score, reinforcement learning, actor-critic
DocType
Journal
Volume
22
Issue
5
ISSN
1424-8220
Citations
1
PageRank
0.43
References
0
Authors
4
Name                Order  Citations  PageRank
Ibrahim A Nemer       1        1        0.43
Tarek R Sheltami      2        1        0.43
Slim Belhaiza         3        1        0.43
Ashraf S. Mahmoud     4       43       10.65