Title
Learning an Embedding Space for Transferable Robot Skills
Abstract
We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropy-regularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for the discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.
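The abstract describes three ingredients: a task-conditioned distribution over a latent skill embedding, a policy conditioned on the state and the sampled embedding, and an entropy-regularized objective estimated with a variational inference network. The sketch below is a minimal, illustrative PyTorch rendering of those ingredients under stated assumptions; it is not the authors' implementation, and the network architectures, the coefficients alpha and beta, and the placeholder reward signal are assumptions made purely for the example.

```python
# Illustrative sketch (not the authors' code): skill embedding p(z|t), policy pi(a|s,z),
# inference network q(z|s,a), and a one-sample entropy-regularized surrogate loss.
import torch
import torch.nn as nn
from torch.distributions import Normal


class SkillEmbedding(nn.Module):
    """Maps a one-hot task id t to a Gaussian over the latent skill z ~ p(z|t)."""
    def __init__(self, n_tasks, z_dim):
        super().__init__()
        self.mu = nn.Linear(n_tasks, z_dim)
        self.log_std = nn.Linear(n_tasks, z_dim)

    def forward(self, t_onehot):
        return Normal(self.mu(t_onehot), self.log_std(t_onehot).exp())


class LatentConditionedPolicy(nn.Module):
    """Gaussian policy pi(a|s,z) conditioned on the state and the skill embedding."""
    def __init__(self, s_dim, z_dim, a_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + z_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * a_dim))

    def forward(self, s, z):
        mu, log_std = self.net(torch.cat([s, z], dim=-1)).chunk(2, dim=-1)
        return Normal(mu, log_std.exp())


class InferenceNetwork(nn.Module):
    """Variational posterior q(z|s,a), used to bound the embedding entropy term."""
    def __init__(self, s_dim, a_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * z_dim))

    def forward(self, s, a):
        mu, log_std = self.net(torch.cat([s, a], dim=-1)).chunk(2, dim=-1)
        return Normal(mu, log_std.exp())


def entropy_regularized_loss(emb, pi, q, s, t_onehot, reward, alpha=1e-2, beta=1e-2):
    """One-sample surrogate: reward + policy entropy + log q(z|s,a) - log p(z|t)."""
    p_z = emb(t_onehot)
    z = p_z.rsample()                       # reparameterized skill sample
    pi_a = pi(s, z)
    a = pi_a.rsample()
    policy_entropy = pi_a.entropy().sum(-1)
    embedding_bonus = (q(s, a).log_prob(z) - p_z.log_prob(z)).sum(-1)
    # Maximize reward plus the entropy terms, so minimize the negation.
    return -(reward + beta * policy_entropy + alpha * embedding_bonus).mean()


if __name__ == "__main__":
    n_tasks, s_dim, a_dim, z_dim = 4, 8, 2, 3
    emb = SkillEmbedding(n_tasks, z_dim)
    pi = LatentConditionedPolicy(s_dim, z_dim, a_dim)
    q = InferenceNetwork(s_dim, a_dim, z_dim)
    s = torch.randn(16, s_dim)
    t = torch.eye(n_tasks)[torch.randint(n_tasks, (16,))]
    reward = torch.randn(16)                # placeholder return signal
    loss = entropy_regularized_loss(emb, pi, q, s, t, reward)
    loss.backward()
    print(float(loss))
```

In the paper's setting, the reward placeholder would come from an off-policy critic trained with stochastic value gradients; the example only illustrates how the latent skill, policy, and inference network fit together in an entropy-regularized objective.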
Year
2018
Venue
International Conference on Learning Representations
Field
Parameterized complexity, Embedding, Inference, Computer science, Interpolation, Latent variable, Artificial intelligence, Robot, Machine learning, Robotics, Reinforcement learning
DocType
Conference
Citations
20
PageRank
0.63
References
0
Authors
5
Name                        Order   Citations   PageRank
Hausman, K.                 1       119         11.92
Jost Tobias Springenberg    2       1126        62.86
Ziyu Wang                   3       372         23.71
Nicolas Heess               4       1762        94.77
Martin Riedmiller           5       5655        366.29