Title
RGB Matters: Learning 7-DoF Grasp Poses on Monocular RGBD Images
Abstract
General object grasping is an important yet unsolved problem in the field of robotics. Most current methods either generate grasp poses with few degrees of freedom, which fail to cover most successful grasps, or take only the unstable depth image or point cloud as input, which may lead to poor results in some cases. In this paper, we propose RGBD-Grasp, a pipeline that solves this problem by decoupling 7-DoF grasp detection into two sub-tasks in which RGB and depth information are processed separately. In the first stage, an encoder-decoder convolutional neural network, Angle-View Net (AVN), predicts the SO(3) orientation of the gripper at every location of the image. Subsequently, a Fast Analytic Searching (FAS) module calculates the opening width of the gripper and its distance to the grasp point. By decoupling the grasp detection problem and introducing the stable RGB modality, our pipeline alleviates the requirement for high-quality depth images and is robust to depth sensor noise. We achieve state-of-the-art results on the GraspNet-1Billion dataset compared with several baselines. Real-robot experiments on a UR5 robot with an Intel RealSense camera and a Robotiq two-finger gripper show high success rates for both single-object scenes and cluttered scenes. Our code and trained models are available at graspnet.net.
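The two-stage design described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): a toy encoder-decoder network standing in for AVN scores a discretized set of gripper orientations at every pixel of the RGB image, and a placeholder function standing in for FAS reads the depth map to return an opening width and an approach distance. The names TinyAngleViewNet and analytic_width_and_depth, as well as the orientation discretization (num_views, num_angles), are assumptions for illustration only.

```python
# Minimal sketch of the decoupled RGB + depth grasp pipeline (assumed
# structure, not the authors' code): an encoder-decoder CNN predicts
# per-pixel orientation scores from RGB, and an analytic step uses depth
# to choose an opening width and approach distance.
import torch
import torch.nn as nn


class TinyAngleViewNet(nn.Module):
    """Toy encoder-decoder that outputs per-pixel orientation scores."""

    def __init__(self, num_views: int = 120, num_angles: int = 6):
        super().__init__()
        # num_views x num_angles is a placeholder discretization of SO(3).
        self.out_channels = num_views * num_angles
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, self.out_channels, 4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W) -> orientation scores: (B, views*angles, H, W)
        return self.decoder(self.encoder(rgb))


def analytic_width_and_depth(depth: torch.Tensor, u: int, v: int,
                             max_width: float = 0.08) -> tuple:
    """Placeholder for the FAS step: given the depth map and a pixel,
    return a (hypothetical) opening width and approach distance. The real
    module searches gripper configurations against the scene geometry;
    here we only illustrate the interface."""
    approach_distance = float(depth[v, u])   # distance to the grasp point
    opening_width = min(max_width, 0.04)     # dummy collision-free width
    return opening_width, approach_distance


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 128, 128)
    depth = torch.rand(128, 128) + 0.3        # synthetic depth in metres
    scores = TinyAngleViewNet()(rgb)          # (1, 720, 128, 128)
    flat = scores[0].permute(1, 2, 0).reshape(-1)
    best = flat.argmax().item()
    pixel = best // scores.shape[1]           # drop the orientation index
    v, u = divmod(pixel, 128)
    width, dist = analytic_width_and_depth(depth, u, v)
    print(f"best pixel=({u},{v}), width={width:.3f} m, distance={dist:.3f} m")
```

The point of the sketch is the interface split: only the RGB image drives the learned orientation prediction, while the depth map is consumed by a separate analytic step, which is how the pipeline reduces its dependence on depth quality.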
Year
2021
DOI
10.1109/ICRA48506.2021.9561409
Venue
2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021)
DocType
Conference
Volume
2021
Issue
1
ISSN
1050-4729
Citations
0
PageRank
0.34
References
7
Authors
6
Name         Order  Citations  PageRank
Minghao Gou  1      2          1.39
Haoshu Fang  2      57         6.86
Zhanda Zhu   3      0          0.34
Sheng Xu     4      0          1.01
Chenxi Wang  5      0          1.01
Cewu Lu      6      993        62.08