Title
Autonomous UAV Trajectory for Localizing Ground Objects: A Reinforcement Learning Approach
Abstract
Disaster management, search and rescue missions, and health monitoring are examples of critical applications that require object localization with high precision and, in some cases, within strict time constraints. In the absence of the global positioning system (GPS), the radio received signal strength indicator (RSSI) can be used for localization due to its simplicity and cost-effectiveness. However, RSSI measurements alone yield low accuracy; unmanned aerial vehicles (UAVs), or drones, can improve localization accuracy thanks to their agility and higher probability of line-of-sight (LoS) links. Hence, in this context, we propose a novel framework based on reinforcement learning (RL) that enables a UAV (agent) to autonomously plan a trajectory that improves the localization accuracy of multiple objects in the shortest time and path length, with fewer signal-strength measurements (waypoints), and/or with lower UAV energy consumption. In particular, we first guide the agent through an initial scan trajectory over the whole region to 1) determine the number of objects and estimate their initial locations, and 2) train the agent online during operation. Then, the agent forms its trajectory by using RL to select the next waypoints that minimize the average localization error over all objects. Our framework incorporates detailed UAV-to-ground channel characteristics through an empirical path-loss and log-normal shadowing model, as well as an elaborate energy consumption model. We investigate and compare the localization precision of our approach against existing methods from the literature by varying the UAV's trajectory length, energy, number of waypoints, and time. Furthermore, we study the impact of the UAV's velocity, altitude, hovering time, communication range, maximum number of RSSI measurements, and number of objects. The results show the superiority of our method over the state of the art and demonstrate its fast reduction of the localization error.
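The framework described above combines two ingredients: an RSSI model built from empirical path loss plus log-normal shadowing, and RL (Q-learning, per the keywords) over candidate waypoints. The paper itself does not publish code, so the sketch below is only an illustration of those two ingredients under assumed values; the transmit power, path-loss exponent, shadowing variance, grid size, reward shaping, and learning hyperparameters are all hypothetical and do not come from the paper.

```python
# Illustrative sketch only: all constants below are assumed for demonstration,
# not the authors' actual parameters or implementation.
import numpy as np

rng = np.random.default_rng(0)

# --- UAV-to-ground RSSI with empirical path loss and log-normal shadowing ---
P_TX_DBM = 20.0   # assumed transmit power of a ground object (dBm)
PL0_DB = 40.0     # assumed path loss at the 1 m reference distance (dB)
ETA = 2.7         # assumed path-loss exponent for a high-LoS air-to-ground link
SIGMA_DB = 4.0    # assumed log-normal shadowing standard deviation (dB)

def rssi_dbm(uav_xyz, obj_xyz):
    """RSSI sample at the UAV: P_tx - PL0 - 10*eta*log10(d) + Gaussian shadowing."""
    d = max(np.linalg.norm(np.asarray(uav_xyz, float) - np.asarray(obj_xyz, float)), 1.0)
    return P_TX_DBM - PL0_DB - 10.0 * ETA * np.log10(d) + rng.normal(0.0, SIGMA_DB)

# --- Tabular Q-learning over a waypoint grid (minimal sketch) ---
GRID = 10                                      # assumed 10x10 grid of candidate waypoints
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # move to a neighboring waypoint
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2              # assumed learning hyperparameters
Q = np.zeros((GRID, GRID, len(ACTIONS)))

def step(state, action):
    """Move the UAV to the next waypoint, clipped to the grid."""
    x = int(np.clip(state[0] + action[0], 0, GRID - 1))
    y = int(np.clip(state[1] + action[1], 0, GRID - 1))
    return (x, y)

def reward(err_before, err_after):
    """Reward the reduction in average localization error (one plausible shaping)."""
    return err_before - err_after

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if rng.random() < EPS:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[state[0], state[1]]))

def q_update(s, a, r, s_next):
    """Standard one-step Q-learning update."""
    best_next = np.max(Q[s_next[0], s_next[1]])
    Q[s[0], s[1], a] += ALPHA * (r + GAMMA * best_next - Q[s[0], s[1], a])

# Example: take one RSSI measurement and one learning step from waypoint (0, 0),
# with hypothetical localization-error estimates before and after the measurement.
print(rssi_dbm((5.0, 5.0, 30.0), (8.0, 2.0, 0.0)))
s = (0, 0)
a = choose_action(s)
s_next = step(s, ACTIONS[a])
q_update(s, a, reward(err_before=12.0, err_after=9.5), s_next)
```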
Year
2021
DOI
10.1109/TMC.2020.2966989
Venue
IEEE Transactions on Mobile Computing
Keywords
Localization, reinforcement learning, Q-Learning, unmanned aerial vehicles (UAVs), drones, trajectory planning, received signal strength (RSS)
DocType
Journal
Volume
20
Issue
7
ISSN
1536-1233
Citations
4
PageRank
0.48
References
0
Authors
4
Name, Order, Citations, PageRank
Dariush Ebrahimi, 1, 126, 12.81
Sanaa Sharafeddine, 2, 145, 23.26
Pin-Han Ho, 3, 3020, 233.38
Chadi Assi, 4, 1357, 137.73