Abstract |
---|
6D grasping in cluttered scenes is a longstanding problem in robotic manipulation. Open-loop manipulation pipelines may fail due to inaccurate state estimation, while most end-to-end grasping methods have not yet scaled to complex scenes with obstacles. In this work, we propose a new method for end-to-end learning of 6D grasping in cluttered scenes. Our hierarchical framework learns collision-free target-driven grasping based on partial point cloud observations. We learn an embedding space to encode expert grasping plans during training and a variational autoencoder to sample diverse grasping trajectories at test time. Furthermore, we train a critic network for plan selection and an option classifier for switching to an instance grasping policy through hierarchical reinforcement learning. We evaluate and analyze our method and compare against several baselines in simulation, and demonstrate that our latent planning can generalize to real-world cluttered-scene grasping tasks. |
Year | DOI | Venue |
---|---|---|
2022 | 10.1109/LRA.2022.3143198 | IEEE Robotics and Automation Letters |

Keywords | DocType | Volume |
---|---|---|
Deep learning in grasping and manipulation; sensorimotor learning | Journal | 7 |

Issue | ISSN | Citations |
---|---|---|
2 | 2377-3766 | 0 |

PageRank | References | Authors |
---|---|---|
0.34 | 0 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Lirui Wang | 1 | 0 | 0.34 |
Xiangyun Meng | 2 | 0 | 0.34 |
Yu Xiang | 3 | 629 | 23.04 |
Dieter Fox | 4 | 12306 | 1289.74 |