Title
MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis
Abstract
Autonomous bin picking poses significant challenges to vision-driven robotic systems given the complexity of the problem, ranging from various sensor modalities, to highly entangled object layouts, to diverse item properties and gripper types. Existing methods often address the problem from a single perspective, yet diverse items and complex bin scenes require diverse picking strategies together with advanced reasoning. Building robust and effective machine-learning algorithms for this complex task therefore requires significant amounts of comprehensive, high-quality data. Collecting such data in the real world would be too expensive and time-prohibitive, and is therefore intractable from a scalability perspective. To tackle this big, diverse data problem, we take inspiration from the recent rise of the metaverse concept and introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis. The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order, and ambidextrous grasp labels for a parallel-jaw and a vacuum gripper. We also provide a real dataset consisting of over 2.3k fully annotated high-quality RGBD images, divided into 5 difficulty levels and an unseen object set, to evaluate different object and layout properties. Finally, we conduct extensive experiments showing that our proposed vacuum seal model and synthetic dataset achieve state-of-the-art performance and generalize to real-world use cases.
Year
2022
DOI
10.1109/CASE49997.2022.9926427
Venue
2022 IEEE 18th International Conference on Automation Science and Engineering (CASE)
Keywords
MetaGraspNet, large-scale benchmark dataset, scene-aware ambidextrous bin picking, physics-based metaverse synthesis, autonomous bin picking, vision-driven robotic systems, sensor modalities, highly entangled object layouts, diverse item properties, gripper types, complex bin scenes, diverse picking strategies, advanced reasoning, robust machine-learning algorithms, effective machine-learning algorithms, complex task, comprehensive quality data, scalability perspective, big data problem, diverse data problem, large-scale photo-realistic bin picking dataset, 82 different article types, object detection, parallel-jaw, vacuum gripper, unseen object, different object, layout properties, synthetic dataset
DocType
Conference
ISSN
2161-8070
ISBN
978-1-6654-9043-6
Citations
0
PageRank
0.34
References
8
Authors
5
Name | Order | Citations | PageRank
Maximilian Gilles | 1 | 0 | 0.34
Yuhao Chen | 2 | 0 | 0.34
Tim Robin Winter | 3 | 0 | 0.34
E. Zhixuan Zeng | 4 | 0 | 0.34
Alexander Wong | 5 | 351 | 69.61