Abstract |
---|
Mobile robotic systems are typically constrained by limited on-board resources, such as computing capability and energy, and are strongly affected by the dynamics of the surrounding environment. This context calls for adaptive run-time decisions that react to dynamic and uncertain operational conditions in order to guarantee performance requirements while respecting the other constraints. In this paper, we propose a reinforcement learning (RL)-based approach for a Quality of Service (QoS)- and energy-aware autonomous robotic mission manager. The mission manager leverages RL by actively monitoring the performance and energy consumption of the mission and then selecting the best mapping parameter configuration according to a cumulative reward that balances QoS and energy. As a case study, we apply this methodology to an autonomous navigation mission. Our simulation results demonstrate the efficiency of the proposed management framework and offer a promising solution for real mobile robotic systems. |
Year | DOI | Venue |
---|---|---|
2019 | 10.1142/S1793351X19400221 | International Journal of Semantic Computing |
Keywords | DocType | Volume
---|---|---|
Robotic run-time adaptation, reinforcement learning, non-functional requirements, quality of service, energy efficiency, autonomous mobile robots | Journal | 13 |
Issue | ISSN | Citations
---|---|---|
4 | 1793-351X | 0 |
PageRank | References | Authors
---|---|---|
0.34 | 0 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Dinh-Khanh Ho | 1 | 0 | 0.68 |
Karim Ben Chehida | 2 | 0 | 0.34 |
Benoît Miramond | 3 | 0 | 0.34 |
Michel Auguin | 4 | 238 | 35.10 |