Title
SingleDemoGrasp: Learning to Grasp From a Single Image Demonstration
Abstract
Learning-based grasping models typically require large amounts of training data and long training times to produce an effective grasping model. Alternatively, small non-generic grasp models have been proposed that are tailored to specific objects by, for example, directly predicting the object's location in 2D/3D space and determining suitable grasp poses by post-processing. In both cases, data generation is a bottleneck, as data must be collected and annotated separately for each individual object and image. In this work, we tackle these issues and propose a grasping model that is developed in four main steps: (1) visual object grasp demonstration, (2) data augmentation, (3) grasp detection model training, and (4) robot grasping action. Four different vision-based grasp models are evaluated with industrial and 3D-printed objects, a robot, and a standard gripper, in both simulated and real environments. The grasping model is implemented in the OpenDR toolkit at: https://github.com/opendr-eu/opendr/tree/master/projects/control/single_demo_grasp.
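
The second step of the pipeline, data augmentation, expands the single annotated demonstration image into a training set. Below is a minimal sketch of such an augmentation loop using OpenCV affine warps; the function name augment_demo, the keypoint representation, and the sampling ranges are illustrative assumptions and do not reflect the OpenDR toolkit's actual implementation or API.

    import cv2
    import numpy as np

    def augment_demo(image, grasp_kps, n_samples=100):
        """Generate augmented copies of a single grasp demonstration.

        image: HxWx3 demonstration image.
        grasp_kps: (K, 2) array of annotated grasp keypoints (x, y) in pixels.
        Returns a list of (augmented_image, transformed_keypoints) pairs.
        Note: function name and sampling ranges are hypothetical, for illustration only.
        """
        h, w = image.shape[:2]
        rng = np.random.default_rng(0)
        samples = []
        for _ in range(n_samples):
            angle = rng.uniform(-180, 180)                # random in-plane rotation (degrees)
            scale = rng.uniform(0.8, 1.2)                 # random scaling
            tx, ty = rng.uniform(-0.1, 0.1, 2) * (w, h)   # random translation (pixels)
            M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
            M[:, 2] += (tx, ty)
            img_aug = cv2.warpAffine(image, M, (w, h), borderMode=cv2.BORDER_REPLICATE)
            # apply the same affine transform to the annotated grasp keypoints
            kps_h = np.hstack([grasp_kps, np.ones((len(grasp_kps), 1))])
            kps_aug = kps_h @ M.T
            samples.append((img_aug, kps_aug))
        return samples

The augmented image/keypoint pairs would then serve as training data for a grasp detection model (step 3); the geometric transformations used here are a generic choice, not necessarily those used by the authors.
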
Year
2022
DOI
10.1109/CASE49997.2022.9926463
Venue
2022 IEEE 18th International Conference on Automation Science and Engineering (CASE)
Keywords
Grasping, Deep Learning in Grasping and Manipulation, Perception for Grasping and Manipulation
DocType
Conference
ISSN
2161-8070
ISBN
978-1-6654-9043-6
Citations
0
PageRank
0.34
References
10
Authors
4
Name                 Order  Citations  PageRank
Amir Mehman Sefat    1      0          0.34
Alexandre Angleraud  2      0          0.34
Esa Rahtu            3      832        52.76
Roel Pieters         4      0          0.34