Title: Learning Multi-Object Dense Descriptor for Autonomous Goal-Conditioned Grasping
Abstract: In a goal-conditioned grasping task, a robot is asked to grasp the objects designated by a user. Existing methods for goal-conditioned grasping either handle only relatively simple scenes or require extra user annotations. This letter proposes an autonomous method that enables the grasping of a target object in a challenging yet general scene containing multiple objects of different classes. It effectively learns a dense descriptor and integrates it with a newly designed grasp affordance model. The proposed method is a self-supervised pipeline trained without any human supervision or robotic sampling. We validate our method via both simulated and real-world experiments while the training relies only on a variety of synthetic data, demonstrating a good generalization capability. Supplementary video demonstrations and material are available at https://vsislab.github.io/agcg/.
Year: 2021
DOI: 10.1109/LRA.2021.3062300
Venue: IEEE Robotics and Automation Letters
Keywords: perception for grasping and manipulation, representation learning
DocType: Journal
Volume: 6
Issue: 2
ISSN: 2377-3766
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name         Order  Citations  PageRank
Shuo Yang    1      2          1.37
Wei Zhang    2      1033       3.19
Ran Song     3      531        1.70
Jiyu Cheng   4      0          2.03
Yibin Li     5      2265       9.56