Title
Safe-to-Explore State Spaces: Ensuring Safe Exploration in Policy Search with Hierarchical Task Optimization
Abstract
Policy search reinforcement learning allows robots to acquire skills by themselves. However, the learning procedure is inherently unsafe, as the robot has no a priori way to predict the consequences of the exploratory actions it takes. Exploration can therefore lead to collisions with the potential to harm the robot and/or the environment. In this work we address the safety aspect by constraining exploration to safe-to-explore state spaces. These are formed by decomposing target skills (e.g., grasping) into higher-ranked sub-tasks (e.g., collision avoidance, joint-limit avoidance) and lower-ranked movement tasks (e.g., reaching). Sub-tasks are defined as concurrent controllers (policies) in different operational spaces, together with associated Jacobians representing their joint-space mapping. Safety is ensured by learning policies corresponding to lower-ranked sub-tasks only in the redundant null space of higher-ranked ones. As a side benefit, learning in sub-manifolds of the state space also improves sample efficiency. Reaching skills performed in simulation and grasping skills performed on a real robot validate the usefulness of the proposed approach.
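The hierarchy described in the abstract rests on projecting lower-ranked (learned) task commands into the redundant null space of higher-ranked safety tasks. The snippet below is a minimal NumPy sketch of the standard two-level prioritized null-space projection that this kind of scheme builds on; it is not the paper's exact controller, and the function names and the random Jacobians in the usage example are illustrative placeholders.

```python
import numpy as np

def nullspace_projector(J, rcond=1e-6):
    """Projector onto the null space of task Jacobian J: N = I - J^+ J."""
    n = J.shape[1]                      # number of joints
    return np.eye(n) - np.linalg.pinv(J, rcond=rcond) @ J

def prioritized_joint_velocities(J_high, xdot_high, J_low, xdot_low):
    """Standard two-level prioritized solution (sketch, not the paper's controller).

    The higher-ranked task (e.g., collision/joint-limit avoidance) is tracked
    exactly where feasible; the lower-ranked command (e.g., a learned reaching
    policy) is projected so it cannot disturb the higher-ranked task.
    """
    J_high_pinv = np.linalg.pinv(J_high)
    N_high = nullspace_projector(J_high)
    qdot_high = J_high_pinv @ xdot_high
    # Resolve the lower-ranked task only in the remaining redundancy.
    qdot = qdot_high + N_high @ np.linalg.pinv(J_low @ N_high) @ (
        xdot_low - J_low @ qdot_high
    )
    return qdot

# Usage example with placeholder Jacobians for a 7-DoF arm:
J1 = np.random.randn(3, 7)              # higher-ranked task Jacobian
J2 = np.random.randn(3, 7)              # lower-ranked (learned) task Jacobian
qdot = prioritized_joint_velocities(J1, np.zeros(3), J2, np.array([0.1, 0.0, -0.05]))
```

In such a setup, exploration noise added to the lower-ranked policy stays inside the null space of the higher-ranked safety tasks, which is what makes the exploration safe by construction.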
Year
2018
DOI
10.1109/HUMANOIDS.2018.8624948
Venue
2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)
Keywords
safe-to-explore state spaces, hierarchical task optimization, policy search reinforcement learning, learning procedure, exploratory actions, safety aspect, collision avoidance, joint limit avoidance, lower ranked movement tasks, joint-space mapping, reaching skills, grasping skills
DocType
Conference
ISSN
2164-0572
ISBN
978-1-5386-7284-6
Citations
0
PageRank
0.34
References
11
Authors
5
Name | Order | Citations | PageRank
Jens Lundell | 1 | 5 | 4.13
Robert Krug | 2 | 21 | 5.59
Erik Schaffernicht | 3 | 1 | 1.70
Todor Stoyanov | 4 | 260 | 26.07
V. Kyrki | 5 | 652 | 61.79