Title
Goal Density-based Hindsight Experience Prioritization for Multi-Goal Robot Manipulation Reinforcement Learning
Abstract
Reinforcement learning for multi-goal robot manipulation tasks is usually challenging, especially when sparse rewards are provided. It often requires millions of data samples to be collected before a stable policy is learned. Recent algorithms such as Hindsight Experience Replay (HER) have greatly accelerated the learning process by replacing the original desired goal with one of the achieved points (substitute goals) along the same trajectory. However, HER selects previous experience naively: both the trajectory selection and the substitute goal sampling are completely random. In this paper, we discuss an experience prioritization strategy for HER that improves learning efficiency. We propose the Goal Density-based hindsight experience Prioritization (GDP) method, which utilizes the density distribution of the achieved points and prioritizes achieved points that are rarely seen in the replay buffer. These points are used as substitute goals for HER. In addition, we propose a Prioritization Switching with Ensembling Strategy (PSES) method to switch between different experience prioritization algorithms during learning, which allows the best-performing algorithm to be selected at each learning stage. We evaluate our method on several OpenAI Gym robotic manipulation tasks. The results show that GDP accelerates the learning process in most tasks and can be further improved when combined with other prioritization methods using PSES.
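The abstract describes the core GDP idea: estimate the density of achieved points stored in the replay buffer and prioritize rarely seen points as HER substitute goals. Below is a minimal illustrative sketch of that prioritization step in Python; the function name, the use of scikit-learn's KernelDensity, and the bandwidth/temperature parameters are assumptions made for illustration, not the authors' actual implementation.

import numpy as np
from sklearn.neighbors import KernelDensity

def gdp_substitute_goal_probs(achieved_goals, bandwidth=0.1, temperature=1.0):
    """Assign higher sampling probability to achieved goals whose estimated
    density over the replay buffer is low (i.e. rarely seen goals).

    achieved_goals: (N, goal_dim) array of achieved points from the buffer.
    Returns a length-N probability vector for sampling substitute goals.
    """
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
    kde.fit(achieved_goals)
    log_density = kde.score_samples(achieved_goals)   # log p(g) for each point
    # Inverse-density weighting: rarely seen goals receive larger weights.
    weights = np.exp(-temperature * log_density)
    return weights / weights.sum()

# Usage sketch: draw prioritized substitute goals for HER relabelling.
rng = np.random.default_rng(0)
achieved = rng.normal(size=(1000, 3))                 # e.g. end-effector positions
probs = gdp_substitute_goal_probs(achieved)
idx = rng.choice(len(achieved), size=64, p=probs)
substitute_goals = achieved[idx]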
Year
2020
DOI
10.1109/RO-MAN47096.2020.9223473
Venue
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
DocType
Conference
ISSN
1944-9445
ISBN
978-1-7281-6075-7
Citations
1
PageRank
0.35
References
0
Authors
4
Name	Order	Citations	PageRank
Yingyi Kuang	1	1	0.35
Abraham Itzhak Weinberg	2	1	0.35
George Vogiatzis	3	1	0.35
Diego R. Faria	4	95	14.96