Title
RAG: Facial Attribute Editing by Learning Residual Attributes
Abstract
Facial attribute editing aims to modify face images in a desired manner, such as changing hair color, gender, or age, or adding or removing eyeglasses. Recent research on this topic largely leverages adversarial losses so that the generated faces are not only realistic but also correspond well to the target attributes. In this paper, we propose the Residual Attribute Generative Adversarial Network (RAG), a novel model for unpaired editing of multiple facial attributes. Instead of directly learning the target attributes, we propose to learn residual attributes, a more intuitive and interpretable representation that converts the original task into arithmetic addition or subtraction of different attributes. Furthermore, we propose an identity preservation loss, which we find facilitates convergence and yields better results. Finally, we leverage visual attention to localize the relevant regions and preserve unrelated content during transformation. Extensive experiments on two facial attribute datasets demonstrate the superiority of our approach in generating realistic, high-quality faces for multiple attributes. Visualizing the residual image, defined as the difference between the original image and the generated result, reveals which regions RAG focuses on when editing different attributes.
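The abstract frames editing as arithmetic over attribute vectors and defines the residual image as the difference between the original and generated images. The following minimal NumPy sketch illustrates that framing only; the function names, the {0, 1} attribute encoding, and the toy attribute set are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def residual_attributes(src_attrs, tgt_attrs):
    # Residual attribute vector: +1 adds an attribute, -1 removes it,
    # 0 leaves it unchanged. (Illustrative encoding; the paper's exact
    # representation may differ.)
    return tgt_attrs - src_attrs

def residual_image(original, generated):
    # Residual image as defined in the abstract: the difference between
    # the generated result and the original image, used to visualize
    # which regions the model edits.
    return generated - original

# Toy example with three attributes: [blond_hair, male, eyeglasses]
src = np.array([0, 1, 0])   # source face: dark-haired man, no glasses
tgt = np.array([0, 1, 1])   # target: same man, with glasses
res = residual_attributes(src, tgt)
print(res)  # [0 0 1] -> only the eyeglasses attribute changes
```

Editing then reduces to feeding the generator the source image together with this residual vector, so "add glasses" and "remove glasses" differ only by a sign.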
Year
2019
DOI
10.1109/ACCESS.2019.2924959
Venue
IEEE ACCESS
Keywords
Computer vision, deep learning, facial attribute editing, generative adversarial nets
Field
Convergence (routing), Residual, Generative adversarial network, Computer science, Visualization, Visual attention, Artificial intelligence, Subtraction, Machine learning, Distributed computing, Adversarial system
DocType
Journal
Volume
7
ISSN
2169-3536
Citations
0
PageRank
0.34
References
0
Authors
5
Name           Order  Citations  PageRank
Honglun Zhang  1      5          3.10
Wenqing Chen   2      0          0.34
Jidong Tian    3      0          1.35
Hao He         4      9          9.34
Yaohui Jin     5      143        29.65