Title
Learning to Group: A Bottom-Up Framework for 3D Part Discovery in Unseen Categories
Abstract
We address the problem of learning to discover 3D parts for objects in unseen categories. Learning a geometric prior over parts and transferring this prior to unseen categories poses a fundamental challenge for data-driven shape segmentation approaches. We formulate the task as a contextual bandit problem and propose a learning-based iterative grouping framework that learns a grouping policy to progressively merge small part proposals into larger ones in a bottom-up fashion. At the core of our approach is restricting the local context used for extracting part-level features, which encourages generalization to novel categories. On PartNet, a recently proposed large-scale fine-grained 3D part dataset, we demonstrate that our method can transfer knowledge of parts learned from 3 training categories to 21 unseen test categories without seeing any annotated samples. Quantitative comparisons against four strong shape segmentation baselines show that our method achieves state-of-the-art performance.
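The abstract describes the grouping procedure only at a high level. Below is a minimal, hypothetical Python sketch of a bottom-up merging loop of this kind; the policy_score heuristic, the set-of-point-indices proposal representation, and the stopping threshold are illustrative assumptions and are not details taken from the paper, where the policy is learned.

import itertools
from typing import FrozenSet, List, Set

# Hypothetical stand-in for a learned grouping policy: scores how likely two
# part proposals belong to the same part (higher means merge sooner).
def policy_score(a: FrozenSet[int], b: FrozenSet[int]) -> float:
    # Placeholder heuristic: prefer merging smaller proposals first.
    return 1.0 / (len(a) + len(b))

def bottom_up_group(proposals: List[Set[int]], threshold: float = 0.05) -> List[Set[int]]:
    """Iteratively merge small part proposals into bigger ones."""
    parts = [frozenset(p) for p in proposals]
    while len(parts) > 1:
        # Score every candidate pair with the (learned) policy.
        best_pair, best_score = None, threshold
        for a, b in itertools.combinations(parts, 2):
            s = policy_score(a, b)
            if s > best_score:
                best_pair, best_score = (a, b), s
        if best_pair is None:  # no pair exceeds the threshold: stop merging
            break
        a, b = best_pair
        parts = [p for p in parts if p not in (a, b)] + [a | b]
    return [set(p) for p in parts]

if __name__ == "__main__":
    # Toy proposals given as sets of point indices; real proposals would be
    # small 3D segments with geometric features.
    print(bottom_up_group([{0, 1}, {2}, {3, 4, 5}, {6}]))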
Year
2020
Venue
ICLR
Keywords
Shape Segmentation, Zero-Shot Learning, Learning Representations
DocType
Conference
Citations
1
PageRank
0.35
References
39
Authors
7
Name | Order | Citations | PageRank
Tiange Luo | 1 | 6 | 0.75
Kaichun Mo | 2 | 286 | 9.21
Zhiao Huang | 3 | 9 | 3.13
Siyu Hu | 4 | 1 | 3.06
Jiarui Xu | 5 | 9 | 2.48
Liwei Wang | 6 | 1272 | 88.14
Hao Su | 7 | 7343 | 302.07