Title
3D-Aware Semantic-Guided Generative Model for Human Synthesis
Abstract
Generative Neural Radiance Field (GNeRF) models, which extract implicit 3D representations from 2D images, have recently been shown to produce realistic images of rigid and semi-rigid objects, such as human faces or cars. However, they usually struggle to generate high-quality images of non-rigid objects, such as the human body, which is of great interest for many computer graphics applications. This paper proposes a 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis, which combines a GNeRF with a texture generator. The former learns an implicit 3D representation of the human body and outputs a set of 2D semantic segmentation masks. The latter transforms these semantic masks into a real image, adding a realistic texture to the human appearance. Without requiring additional 3D information, our model can learn 3D human representations with photo-realistic and controllable generation. Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines. The code is available at https://github.com/zhangqianhui/3DSGAN.
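To make the two-stage pipeline described in the abstract concrete, the sketch below mimics it in PyTorch: a GNeRF-style generator maps a shape latent and a camera pose to soft semantic masks, and a texture generator translates those masks, together with a texture latent, into an RGB image. This is a minimal illustration only; all class names, layer choices, and the assumed number of semantic classes are hypothetical stand-ins, not the authors' actual architecture (see https://github.com/zhangqianhui/3DSGAN for that).

```python
# Minimal sketch of a two-stage "semantic masks -> textured image" pipeline.
# Hypothetical stand-ins throughout: a real GNeRF would volume-render
# per-point semantic logits; a small MLP stands in for that renderer here.
import torch
import torch.nn as nn

NUM_CLASSES = 8  # assumed number of body-part semantic classes

class SemanticGNeRF(nn.Module):
    """Stage 1 (stand-in): shape latent + camera pose -> 2D semantic masks."""
    def __init__(self, z_dim=128, res=64):
        super().__init__()
        self.res = res
        self.mlp = nn.Sequential(
            nn.Linear(z_dim + 12, 256), nn.ReLU(),
            nn.Linear(256, NUM_CLASSES * res * res),
        )

    def forward(self, z_shape, cam_pose):
        # cam_pose: flattened 3x4 camera extrinsics, shape (B, 12)
        logits = self.mlp(torch.cat([z_shape, cam_pose], dim=1))
        logits = logits.view(-1, NUM_CLASSES, self.res, self.res)
        return logits.softmax(dim=1)  # soft per-pixel class probabilities

class TextureGenerator(nn.Module):
    """Stage 2 (stand-in): semantic masks + texture latent -> RGB image."""
    def __init__(self, z_dim=128, res=64):
        super().__init__()
        self.res = res
        self.embed = nn.Linear(z_dim, res * res)
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES + 1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, masks, z_texture):
        # Broadcast the texture latent as an extra spatial channel.
        tex = self.embed(z_texture).view(-1, 1, self.res, self.res)
        return self.net(torch.cat([masks, tex], dim=1))

if __name__ == "__main__":
    gnerf, texgen = SemanticGNeRF(), TextureGenerator()
    z_shape, z_texture = torch.randn(2, 128), torch.randn(2, 128)
    cam_pose = torch.randn(2, 12)
    masks = gnerf(z_shape, cam_pose)     # (2, 8, 64, 64) semantic masks
    image = texgen(masks, z_texture)     # (2, 3, 64, 64) RGB in [-1, 1]
    print(masks.shape, image.shape)
```

The point of the split into two latents is the controllability claimed in the abstract: because pose and shape only enter stage 1 while appearance only enters stage 2, each factor can be varied independently at generation time.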
Year
2022
DOI
10.1007/978-3-031-19784-0_20
Venue
European Conference on Computer Vision
Keywords
Generative neural radiance fields, Human image generation
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
7
Name                | Order | Citations | PageRank
Jichao Zhang        | 1     | 0         | 0.34
Enver Sangineto     | 2     | 353       | 26.77
Hao Tang            | 3     | 338       | 34.83
Aliaksandr Siarohin | 4     | 28        | 3.06
Zhun Zhong          | 5     | 102       | 11.56
Nicu Sebe           | 6     | 0         | 1.01
Wei Wang            | 7     | 131       | 14.16