Abstract |
---|
Deep-learning generative methods have developed rapidly; for example, various single- and multi-view generative methods for meshes, voxels, and point clouds have been introduced. However, most 3D single-view reconstruction methods generate whole objects in a single pass, or in a cascaded way for dense structures, and thus miss the local details of fine-grained structures. Such methods also cannot be used when the generative model is required to provide semantic information for object parts. This paper proposes an efficient part-based recurrent generative network that generates object parts sequentially from a single-view image and its semantic projection. The advantage of our method is its awareness of part structures; hence it generates more accurate models with fine-grained structures. Experiments show that our method attains higher accuracy than other point set generation methods, particularly for local details. |
Year | DOI | Venue
---|---|---
2020 | 10.1016/j.knosys.2020.105574 | Knowledge-Based Systems

Keywords | DocType | Volume
---|---|---
3D reconstruction, Point cloud generation, Part-based, Semantic reconstruction | Journal | 194

ISSN | Citations | PageRank
---|---|---
0950-7051 | 1 | 0.35

References | Authors
---|---
0 | 8
Name | Order | Citations | PageRank
---|---|---|---
Yang Zhang | 1 | 1 | 0.35 |
Kai Huo | 2 | 1 | 0.68 |
Zhen Liu | 3 | 26 | 7.75 |
Yu Zang | 4 | 74 | 9.22 |
Yongxiang Liu | 5 | 1 | 0.35 |
Xiang Li | 6 | 2 | 1.04 |
Qianyu Zhang | 7 | 1 | 0.35 |
Cheng Wang | 8 | 118 | 29.56 |
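The abstract describes generating an object part by part with a recurrent network, conditioned on a single-view image and its semantic projection, then assembling the parts into a full point cloud. The sketch below illustrates that sequential scheme only; all sizes, weights, and the update rule are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 64         # fused image + semantic-projection feature size (assumed)
HIDDEN_DIM = 32       # recurrent hidden-state size (assumed)
POINTS_PER_PART = 256 # points decoded per part (assumed)
NUM_PARTS = 4         # e.g. a chair: seat, back, legs, armrests

# Random stand-in weights; a trained model would learn these.
W_h = rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) * 0.1
W_x = rng.standard_normal((FEAT_DIM, HIDDEN_DIM)) * 0.1
W_out = rng.standard_normal((HIDDEN_DIM, POINTS_PER_PART * 3)) * 0.1

def generate_parts(fused_feat):
    """Run the recurrent loop; return one (POINTS_PER_PART, 3) cloud per part."""
    h = np.zeros(HIDDEN_DIM)
    parts = []
    for _ in range(NUM_PARTS):
        h = np.tanh(h @ W_h + fused_feat @ W_x)         # recurrent state update
        part = (h @ W_out).reshape(POINTS_PER_PART, 3)  # decode one part's points
        parts.append(part)
    return parts

feat = rng.standard_normal(FEAT_DIM)      # stand-in for the fused encoder output
parts = generate_parts(feat)
full_cloud = np.concatenate(parts)        # assemble the object from its parts
print(full_cloud.shape)                   # (1024, 3)
```

Because each part is emitted at a known step of the loop, every point in the assembled cloud carries an implicit part label, which is the property the abstract highlights over single-pass generators.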