Title: 3D Object Reconstruction from a Single Depth View with Adversarial Learning
Abstract: In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike existing work, which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN takes only the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid by filling in the occluded/missing regions. The key idea is to combine the generative capabilities of autoencoders with the conditional Generative Adversarial Network (GAN) framework to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets show that the proposed 3D-RecGAN significantly outperforms the state of the art in single-view 3D object reconstruction, and is able to reconstruct unseen types of objects. Our code and data are available at: https://github.com/Yang7879/3D-RecGAN.
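The abstract describes training a generator with a joint objective: a per-voxel reconstruction loss (the autoencoder side) plus an adversarial term from a conditional discriminator (the GAN side). A minimal illustrative sketch of such a combined generator loss, in plain Python; the weight `lam` and the exact loss forms are hypothetical, not the paper's formulation:

```python
import math

def voxel_bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy between predicted occupancy
    probabilities and a ground-truth occupancy grid (both flattened)."""
    n = len(pred)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / n

def generator_loss(pred, target, d_score_fake, lam=0.5):
    """Joint generator objective: reconstruction BCE plus a
    non-saturating adversarial term that rewards fooling the
    discriminator. `lam` is a hypothetical weighting, for illustration."""
    adv = -math.log(d_score_fake + 1e-7)
    return voxel_bce(pred, target) + lam * adv

# Toy flattened 2x2x2 occupancy grids: predicted probabilities vs. ground truth.
pred = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.95, 0.05]
target = [1, 0, 1, 0, 1, 0, 1, 0]
loss = generator_loss(pred, target, d_score_fake=0.6)
```

In this sketch, lowering the reconstruction term fills in occupancy to match the ground-truth grid, while the adversarial term pushes completions toward shapes the discriminator judges realistic.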
Year: 2017
DOI: 10.1109/ICCVW.2017.86
Venue: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
Keywords: single depth view, 3D-RecGAN approach, voxel grid representation, occluded/missing regions, high-dimensional voxel space, https://github.com/Yang7879/3D-RecGAN, 3D object reconstruction, generative adversarial networks framework, class labels, GAN, synthetic datasets, adversarial learning, 3D geometry, Poisson algorithm, autoencoders, fine-grained 3D structures
DocType: Conference
Volume: abs/1708.07969
Issue: 1
ISSN: 2473-9936
ISBN: 978-1-5386-1035-0
Citations: 20
PageRank: 0.74
References: 35
Authors: 6

Name            Order  Citations  PageRank
B. Yang         1      69         7.63
Hongkai Wen     2      313        25.88
Sen Wang        3      279        21.15
Ronald Clark    4      131        9.10
Andrew Markham  5      519        48.34
Niki Trigoni    6      1160       85.23