Abstract |
---|
Generative Neural Radiance Field (GNeRF) models, which extract implicit 3D representations from 2D images, have recently been shown to produce realistic images of rigid or semi-rigid objects, such as human faces or cars. However, they usually struggle to generate high-quality images of non-rigid objects, such as the human body, which is of great interest for many computer graphics applications. This paper proposes a 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis, which combines a GNeRF with a texture generator. The former learns an implicit 3D representation of the human body and outputs a set of 2D semantic segmentation masks. The latter transforms these semantic masks into a real image, adding a realistic texture to the human appearance. Without requiring additional 3D information, our model learns 3D human representations that enable photo-realistic, controllable generation. Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines. The code is available at https://github.com/zhangqianhui/3DSGAN. |
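The abstract describes a two-stage pipeline: a GNeRF stage that renders 2D semantic segmentation masks from a 3D-aware representation, followed by a texture generator that translates those masks into an RGB image. A minimal sketch of that composition is shown below; the function names, shapes, and internals are hypothetical placeholders (random outputs stand in for the learned networks), not the paper's actual implementation.

```python
import numpy as np

def gnerf_semantic_stage(num_classes=8, hw=(64, 64), seed=0):
    """Hypothetical stand-in for the GNeRF stage: in the paper this maps a
    shape latent and a camera pose to per-pixel semantic class predictions.
    Here, random logits serve as a placeholder."""
    rng = np.random.default_rng(seed)
    h, w = hw
    logits = rng.standard_normal((num_classes, h, w))
    # Per-pixel argmax over classes yields an (H, W) segmentation mask.
    return logits.argmax(axis=0)

def texture_stage(semantic_mask, texture_seed=42):
    """Hypothetical stand-in for the texture generator: in the paper this maps
    a semantic mask and a texture latent to a realistic RGB image. Here, each
    semantic class is painted with a flat random color as a placeholder."""
    num_classes = int(semantic_mask.max()) + 1
    rng = np.random.default_rng(texture_seed)
    palette = rng.uniform(0.0, 1.0, size=(num_classes, 3))
    # Fancy indexing broadcasts the per-class color to every pixel of
    # that class, producing an (H, W, 3) image in [0, 1].
    return palette[semantic_mask]

# Two-stage generation: 3D-aware semantics first, then texture on top.
mask = gnerf_semantic_stage()
image = texture_stage(mask)
```

The key design point the sketch illustrates is the decoupling: the 3D-aware stage only has to model geometry and semantics, while appearance is added by a purely 2D translation network conditioned on the masks.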
Year | DOI | Venue
---|---|---
2022 | 10.1007/978-3-031-19784-0_20 | European Conference on Computer Vision

Keywords | DocType | Citations
---|---|---
Generative neural radiance fields, Human image generation | Conference | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 7
Name | Order | Citations | PageRank |
---|---|---|---|
Jichao Zhang | 1 | 0 | 0.34 |
Enver Sangineto | 2 | 353 | 26.77 |
Hao Tang | 3 | 338 | 34.83 |
Aliaksandr Siarohin | 4 | 28 | 3.06 |
Zhun Zhong | 5 | 102 | 11.56
Nicu Sebe | 6 | 0 | 1.01 |
Wei Wang | 7 | 131 | 14.16 |