Title
Obtaining fault tolerance avoidance behavior using deep reinforcement learning.
Abstract
This article presents a mapless movement policy for mobile agents that is designed specifically to be fault-tolerant. The policy, learned using deep reinforcement learning, has an advantage over the usual mapless policies: it can continue to control the robot even when some of its sensors fail. It is an end-to-end policy based on three neural models that not only moves the robot and maximizes coverage of the environment, but also learns the movement behavior best suited to its perception needs. A custom robot whose sensor readings do not overlap one another has been used; this setup makes it possible to assess the operation of a fault-robust policy, since the failure of any sensor unambiguously affects the robot's perception. The proposed system exhibits several advantages in terms of robustness, extensibility and utility. The system has been trained and tested exhaustively in a simulator, obtaining very good results, and it has also been transferred to real robots, verifying the generalization and correct functioning of the model in real environments.
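The article itself does not include source code; the snippet below is a minimal, hedged sketch of one way such a fault-tolerant mapless policy could be structured in PyTorch. Readings from broken sensors are zero-masked and a per-sensor "working" flag is fed to the network alongside the readings, so the policy can adapt its behavior to the sensors it still has. All names, layer sizes and the sensor count (FaultTolerantPolicy, NUM_SENSORS, etc.) are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch (not the paper's code): a mapless policy network
# that tolerates failed range sensors by zero-masking their readings
# and appending a binary fault flag per sensor.
import torch
import torch.nn as nn

NUM_SENSORS = 16   # assumed number of non-overlapping range sensors
NUM_ACTIONS = 2    # e.g. linear and angular velocity commands

class FaultTolerantPolicy(nn.Module):
    def __init__(self, num_sensors: int = NUM_SENSORS,
                 num_actions: int = NUM_ACTIONS, hidden: int = 128):
        super().__init__()
        # Input: sensor readings plus a "working" flag per sensor.
        self.net = nn.Sequential(
            nn.Linear(2 * num_sensors, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_actions),
            nn.Tanh(),  # bound velocity commands to [-1, 1]
        )

    def forward(self, readings: torch.Tensor,
                working: torch.Tensor) -> torch.Tensor:
        # Zero out readings from broken sensors so their garbage
        # values cannot influence the chosen action.
        masked = readings * working
        return self.net(torch.cat([masked, working], dim=-1))

if __name__ == "__main__":
    policy = FaultTolerantPolicy()
    readings = torch.rand(1, NUM_SENSORS)  # normalized range readings
    working = torch.ones(1, NUM_SENSORS)
    working[0, 3] = 0.0                    # simulate one failed sensor
    action = policy(readings, working)
    print(action)                          # e.g. [[v, w]] velocity command
```

Feeding the fault mask as an explicit input, rather than only zeroing the readings, lets the network distinguish "this sensor sees nothing nearby" from "this sensor is broken", which is one plausible way to realize the adaptive behavior the abstract describes.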
Year
2019
DOI
10.1016/j.neucom.2018.11.090
Venue
Neurocomputing
Keywords
Deep reinforcement learning, Obstacle avoidance, Fault tolerance
Field
Obstacle avoidance, Robustness (computer science), Fault tolerance, Artificial intelligence, Robot, Perception, Extensibility, Mathematics, Machine learning, Reinforcement learning
DocType
Journal
Volume
345
ISSN
0925-2312
Citations
1
PageRank
0.37
References
18
Authors
3
Name                 Order  Citations  PageRank
Fidel Aznar Gregori  1      7          3.19
Mar Pujol López      2      28         8.54
R. Rizo              3      51         14.90