Title
PGNet: A Part-based Generative Network for 3D object reconstruction
Abstract
Deep-learning generative methods have developed rapidly; various single- and multi-view generative methods for meshes, voxels, and point clouds have been introduced. However, most single-view 3D reconstruction methods generate the whole object at once, or in a cascaded way for dense structures, which misses the local details of fine-grained structures. These methods also cannot provide semantic information for individual parts. This paper proposes an efficient part-based recurrent generative network that generates object parts sequentially from a single-view image and its semantic projection. The advantage of our method is its awareness of part structure; hence it generates more accurate models with fine-grained structures. Experiments show that our method attains higher accuracy than other point set generation methods, particularly for local details.
Year: 2020
DOI: 10.1016/j.knosys.2020.105574
Venue: Knowledge-Based Systems
Keywords: 3D reconstruction, Point cloud generation, Part-based, Semantic reconstruction
DocType: Journal
Volume: 194
ISSN: 0950-7051
Citations: 1
PageRank: 0.35
References: 0
Authors: 8
Name            Order  Citations  PageRank
Yang Zhang      1      1          0.35
Kai Huo         2      1          0.68
Zhen Liu        3      26         7.75
Yu Zang         4      74         9.22
Yongxiang Liu   5      1          0.35
Xiang Li        6      2          1.04
Qianyu Zhang    7      1          0.35
Cheng Wang      8      1182       9.56