Abstract |
---|
Just like humans, robots can improve their performance by practicing, i.e. by performing the desired behavior many times and updating the underlying skill representation with the newly gathered data. In this paper, we propose to implement robot practicing by applying statistical and reinforcement learning (RL) in a latent space of the selected skill representation. The latent space is computed by a deep autoencoder neural network trained on data generated in simulation. We show, however, that the resulting latent space representation is also useful for learning on a real robot. Our simulation and real-world results demonstrate that by exploiting the latent space of the underlying motor skill representation, the amount of data needed for effective learning by Gaussian Process Regression (GPR) can be significantly reduced. Similarly, the number of RL epochs can be significantly reduced. Finally, our results show that an autoencoder-based latent space is more effective for these purposes than a latent space computed by principal component analysis. (c) 2020 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). |
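The abstract's core idea — fitting a regression model in a low-dimensional latent space of the skill parameters rather than in the full parameter space — can be sketched in plain NumPy. The paper's encoder is a deep autoencoder; the sketch below substitutes a linear PCA (SVD) encoder, which is the baseline the paper compares against. All dimensions, the synthetic data, and the function names (`encode`, `rbf`, `predict`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: skill parameters are 20-dimensional vectors that
# actually vary along only 2 latent directions, mimicking the idea that
# motor-skill parameters lie on a low-dimensional manifold.
n, full_dim, latent_dim = 60, 20, 2
Z_true = rng.normal(size=(n, latent_dim))           # hidden latent coordinates
W = rng.normal(size=(latent_dim, full_dim))         # fixed linear decoder
X = Z_true @ W + 0.01 * rng.normal(size=(n, full_dim))
y = np.sin(Z_true[:, 0]) + 0.5 * Z_true[:, 1]       # task outcome per trial

# Linear "encoder" via PCA (SVD of the centered data matrix).
X_mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - X_mean, full_matrices=False)

def encode(A):
    """Project full-dimensional parameters onto the latent subspace."""
    return (A - X_mean) @ Vt[:latent_dim].T

# Gaussian Process Regression with an RBF kernel, fit in the latent space.
def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

Z = encode(X)
K = rbf(Z, Z) + 1e-6 * np.eye(n)                    # jitter for stability
alpha = np.linalg.solve(K, y)                       # GPR weight vector

def predict(X_new):
    """GPR posterior mean for new full-dimensional inputs."""
    return rbf(encode(X_new), Z) @ alpha

# A noise-free GPR nearly interpolates its training data, so the
# in-sample error should be small.
err = np.abs(predict(X) - y).max()
print(f"max in-sample abs error: {err:.2e}")
```

The point of the sketch is that the kernel is evaluated on 2-dimensional latent codes instead of 20-dimensional raw parameters, which is why far fewer samples suffice for the regression — the effect the paper reports for both GPR and RL.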
Year | DOI | Venue |
---|---|---|
2021 | 10.1016/j.robot.2020.103690 | Robotics and Autonomous Systems |
Keywords | DocType | Volume |
---|---|---|
Skill learning, Latent space representations, Deep autoencoder neural networks | Journal | 135 |
ISSN | Citations | PageRank |
---|---|---|
0921-8890 | 1 | 0.35 |
References | Authors |
---|---|
0 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Rok Pahic | 1 | 1 | 2.04 |
Zvezdan Loncarevic | 2 | 1 | 0.35 |
Andrej Gams | 3 | 385 | 29.54 |
Ales Ude | 4 | 898 | 85.11 |