Abstract
---
Robotic manipulators are reaching a state where we could see them in household environments within the next decade. Nevertheless, such robots need to be easy for lay people to instruct. This is why kinesthetic teaching has become very popular in recent years: the robot is taught a motion that is encoded as a parametric function, usually a Movement Primitive (MP). This approach produces trajectories that are usually suboptimal, so the robot needs to be able to improve them through trial and error. Such optimization is often done with Policy Search (PS) reinforcement learning, using a given reward function. PS algorithms can be classified as model-free, where neither the environment nor the reward function is modelled, or model-based, which can use a surrogate model of the reward function and/or a model of the task dynamics. However, MPs can become very high-dimensional in terms of parameters, which constitute the search space, so their optimization often requires too many samples. In this paper, we assume we have a robot motion task characterized by an MP whose dynamics we cannot model. We build a surrogate model for the reward function that maps an MP parameter latent space (obtained through a Mutual-Information-weighted Gaussian Process Latent Variable Model) into a reward. While we do not model the task dynamics, using mutual information to shrink the search space makes it more consistent with the reward, so the policy improvement is faster in terms of sample efficiency.
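The pipeline the abstract describes (reduce the high-dimensional MP parameter space to a latent space, then fit a GP surrogate from latent coordinates to reward) can be sketched as follows. This is a minimal illustration, not the paper's method: it uses PCA as a stand-in for the Mutual-Information-weighted GPLVM, a toy reward in place of a robot rollout, and scikit-learn's `GaussianProcessRegressor` for the surrogate; all variable names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy stand-in for sampled MP parameter vectors (e.g., DMP weights)
# and the rewards observed when executing them on the robot.
n_samples, n_params, n_latent = 40, 50, 3
W = rng.normal(size=(n_samples, n_params))           # MP parameters (high-dim)
rewards = -np.linalg.norm(W[:, :n_latent], axis=1)   # toy reward: depends on few dims

# Step 1: learn a low-dimensional latent space of MP parameters.
# The paper uses an MI-weighted GPLVM; PCA is only a placeholder here.
latent = PCA(n_components=n_latent).fit_transform(W)

# Step 2: fit a GP surrogate mapping latent coordinates to reward,
# so candidate policies can be scored without executing every rollout.
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(latent, rewards)

# Predict mean reward and uncertainty for candidate latent points;
# the uncertainty can drive exploration in the policy search loop.
mu, sigma = gp.predict(latent[:5], return_std=True)
```

The key design point reflected here is that the surrogate is fit in the latent space rather than the full parameter space, which is what keeps the number of real robot rollouts low.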
Year | DOI | Venue
---|---|---
2020 | 10.1109/ICRA40945.2020.9196658 | 2020 IEEE International Conference on Robotics and Automation (ICRA)

Keywords | DocType | Volume
---|---|---
sample-efficient robot motion learning, Gaussian process latent variable models, robotic manipulators, household environments, kinesthetic teaching, parametric function, movement primitive, mutual-information-weighted Gaussian process latent variable model, trial-and-error, trajectory production, task dynamics, MP parameter latent space, robot motion task, search space, surrogate model, PS algorithms, policy search reinforcement learning | Conference | 2020

Issue | ISSN | ISBN
---|---|---
1 | 1050-4729 | 978-1-7281-7396-2

Citations | PageRank | References
---|---|---
1 | 0.36 | 6
Authors (3)
---
Name | Order | Citations | PageRank |
---|---|---|---
Juan Antonio Delgado-Guerrero | 1 | 1 | 0.36 |
Adria Colome | 2 | 30 | 5.89 |
Carme Torras | 3 | 1155 | 115.66 |