Abstract |
---|
This work addresses the problem of editing facial images by manipulating specified attributes of interest. To learn latent representations disentangled with respect to specified face attributes, a novel attribute-disentangled generative model is proposed by combining variational autoencoders (VAEs) and generative adversarial networks (GANs). The proposed model contains only two deep mappings, an encoder and a decoder, analogous to their counterparts in VAEs. The latent space produced by the encoder is split into two parts: a style space and an attribute space. The former represents attribute-irrelevant factors such as identity, position, illumination, and background; the latter represents attributes such as hair color, gender, and the presence of glasses, with each dimension encoding a single attribute. By casting constraints on the encoder's output as discriminative objectives, the encoder acts not only as a discriminator that distinguishes real samples from generated ones, but also as an attribute classifier that decides whether a sample has the specified attributes. In addition to the reconstruction and Kullback-Leibler (KL) divergence regularization losses used in VAEs, an adversarial training loss defined over the style and attribute parts of the latent space is introduced, driving the model to generate images whose latent distribution is close to that of the real data. The model was evaluated on the CelebA dataset, and experimental results showed its effectiveness in disentangling face attributes and generating high-quality face images. |
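The abstract describes a latent code split into a style part and an attribute part, regularized by a KL divergence term as in VAEs. A minimal sketch of these two ingredients, assuming a flat latent vector and hypothetical dimensions (5 style, 3 attribute) not taken from the paper:

```python
import math

def kl_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ) for a diagonal Gaussian,
    the regularization term applied to the latent code in a VAE:
    0.5 * sum(mu^2 + sigma^2 - log(sigma^2) - 1)."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, logvar))

def split_latent(z, n_attr):
    """Split an encoder output into a style part (attribute-irrelevant
    factors) and an attribute part, one dimension per attribute."""
    return z[:-n_attr], z[-n_attr:]

# Hypothetical 8-D latent code: 5 style dims + 3 attribute dims
# (e.g. hair color, gender, glasses); sizes are illustrative only.
z = [0.1, -0.2, 0.0, 0.3, -0.1, 0.9, -0.8, 0.7]
style, attr = split_latent(z, 3)
kl = kl_standard_normal(z, [0.0] * len(z))  # logvar = 0 => unit variance
```

Editing a single attribute then amounts to changing one coordinate of `attr` before decoding, while `style` is left untouched.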
Year | DOI | Venue |
---|---|---|
2018 | 10.1109/ICPR.2018.8545633 | 2018 24th International Conference on Pattern Recognition (ICPR) |
Field | DocType | ISSN |
---|---|---|
Iterative reconstruction, Discriminator, Pattern recognition, Computer science, Regularization (mathematics), Artificial intelligence, Encoder, Decoding methods, Classifier (linguistics), Discriminative model, Generative model | Conference | 1051-4651 |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 0 |
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Defang Li | 1 | 0 | 0.68 |
Min Zhang | 2 | 134 | 38.40 |
Weifu Chen | 3 | 1 | 4.09 |
Guo-Can Feng | 4 | 1 | 3.74 |