Title
Compositional Transfer in Hierarchical Reinforcement Learning
Abstract
The successful application of general reinforcement learning algorithms to real-world robotics tasks is often limited by their high data requirements. We introduce Regularized Hierarchical Policy Optimization (RHPO), which improves data-efficiency in domains with multiple dominant tasks and ultimately reduces required platform time. To this end, we employ compositional inductive biases on multiple levels and corresponding mechanisms for sharing off-policy transition data across low-level controllers and tasks, as well as for scheduling tasks. The presented algorithm enables stable and fast learning in complex, real-world domains, both in the parallel multitask and in the sequential transfer setting. We show that the investigated types of hierarchy enable positive transfer while partially mitigating negative interference, and we evaluate the benefits of additional incentives for efficient, compositional task solutions in single-task domains. Finally, we demonstrate substantial gains in data-efficiency and final performance over competitive baselines in a week-long, physical robot stacking experiment.
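The compositional structure described in the abstract can be illustrated with a minimal sketch (not the authors' code): a set of low-level Gaussian components shared across all tasks, combined by a task-conditioned high-level categorical controller, so that off-policy data from any task can train the shared components. Class, module, and dimension names below are illustrative assumptions, and the MPO-style regularized training procedure is not shown.

```python
import torch
import torch.nn as nn


class SharedMixturePolicy(nn.Module):
    """Sketch of a task-conditioned mixture policy with shared low-level components."""

    def __init__(self, obs_dim, act_dim, num_components, num_tasks, hidden=256):
        super().__init__()
        # Low-level components: shared across tasks, each maps state -> Gaussian action.
        self.components = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                          nn.Linear(hidden, 2 * act_dim))  # mean and log-std
            for _ in range(num_components)
        ])
        # High-level controller: task-conditioned categorical over the shared components.
        self.gating = nn.Sequential(
            nn.Linear(obs_dim + num_tasks, hidden), nn.Tanh(),
            nn.Linear(hidden, num_components))

    def forward(self, obs, task_onehot):
        logits = self.gating(torch.cat([obs, task_onehot], dim=-1))
        mix = torch.distributions.Categorical(logits=logits)
        means, log_stds = [], []
        for comp in self.components:
            mu, log_std = comp(obs).chunk(2, dim=-1)
            means.append(mu)
            log_stds.append(log_std)
        comps = torch.distributions.Independent(
            torch.distributions.Normal(torch.stack(means, dim=-2),
                                       torch.stack(log_stds, dim=-2).exp()), 1)
        # Mixture-of-Gaussians action distribution; the components are reused by every
        # task, only the gating depends on the task identity.
        return torch.distributions.MixtureSameFamily(mix, comps)
```

Under these assumptions, transfer to a new task would amount to learning (or fine-tuning) only the gating network while reusing the pretrained low-level components.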
Year
2020
DOI
10.15607/RSS.2020.XVI.054
Venue
Robotics: Science and Systems
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
10
Name                         Order   Citations   PageRank
Markus Wulfmeier             1       51          6.86
Abbas Abdolmaleki            2       46          12.82
Roland Hafner                3       22          2.70
Jost Tobias Springenberg     4       1126        62.86
M. Neunert                   5       65          9.95
Noah Siegel                  6       5           2.48
Tim Hertweck                 7       0           0.34
Thomas Lampe                 8       21          2.33
Nicolas Heess                9       1762        94.77
Martin Riedmiller            10      5655        366.29