Title
Multiplicative Controller Fusion: Leveraging Algorithmic Priors for Sample-efficient Reinforcement Learning and Safe Sim-To-Real Transfer
Abstract
Learning-based approaches often outperform hand-coded algorithmic solutions for many problems in robotics. However, learning long-horizon tasks on real robot hardware can be intractable, and transferring a learned policy from simulation to reality is still extremely challenging. We present a novel approach to model-free reinforcement learning that can leverage existing sub-optimal solutions as an algorithmic prior during training and deployment. During training, our gated fusion approach enables the prior to guide the initial stages of exploration, increasing sample efficiency and enabling learning from sparse long-horizon reward signals. Importantly, the policy can learn to improve beyond the performance of the sub-optimal prior since the prior's influence is annealed gradually. During deployment, the policy's uncertainty provides a reliable strategy for transferring a simulation-trained policy to the real world by falling back to the prior controller in uncertain states. We show the efficacy of our Multiplicative Controller Fusion approach on the task of robot navigation and demonstrate safe transfer from simulation to the real world without any fine-tuning. The code for this project is made publicly available at https://sites.google.com/view/mcf-nav/home
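The fusion mechanism described in the abstract can be sketched concretely. The following is a minimal illustrative sketch, not the paper's exact formulation: it assumes both the learned policy and the algorithmic prior output Gaussian action distributions, fuses them as the normalised product pi^alpha * prior^(1-alpha) with the gate alpha annealed from 0 (prior-dominated) to 1 (policy-only) over training, and falls back to the prior at deployment when the policy's uncertainty exceeds a threshold. All function names, the linear schedule, and UNCERTAINTY_THRESHOLD are hypothetical.

    import numpy as np

    def fuse_gaussians(mu_pi, sigma_pi, mu_prior, sigma_prior, alpha):
        # Gated multiplicative fusion: the normalised product
        # pi(a)^alpha * prior(a)^(1 - alpha) of two Gaussians is again a
        # Gaussian with combined precision and precision-weighted mean.
        prec_pi = alpha / sigma_pi ** 2            # powering a Gaussian scales its precision
        prec_prior = (1.0 - alpha) / sigma_prior ** 2
        prec = prec_pi + prec_prior
        mu = (prec_pi * mu_pi + prec_prior * mu_prior) / prec
        return mu, np.sqrt(1.0 / prec)

    def gate_schedule(step, total_steps):
        # Anneal alpha from 0 (prior guides exploration) towards 1
        # (pure policy), letting the policy surpass the sub-optimal
        # prior. A linear schedule is used here purely for illustration.
        return min(1.0, step / total_steps)

    # Deployment: fall back to the prior controller in uncertain states.
    UNCERTAINTY_THRESHOLD = 0.5  # hypothetical tuning parameter

    def deploy_action(mu_pi, sigma_pi, mu_prior):
        if sigma_pi > UNCERTAINTY_THRESHOLD:
            return mu_prior  # policy is uncertain: trust the prior
        return mu_pi

Sampling actions from the fused Gaussian during training reproduces the gated-exploration behaviour the abstract describes; the gating schedule and the uncertainty estimate used in the paper itself may differ from this sketch.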
Year
2020
DOI
10.1109/IROS45743.2020.9341372
Venue
IROS
DocType
Conference
Citations
1
PageRank
0.36
References
20
Authors
5
Name               Order   Citations   PageRank
Krishan Rana       1       1           1.03
Vibhavari Dasagi   2       1           1.37
Ben Talbot         3       2           1.72
Michael Milford    4       1221        84.09
Niko Sünderhauf    5       449         32.94