Title
CoordGAN: Self-Supervised Dense Correspondences Emerge from GANs
Abstract
Recent advances show that Generative Adversarial Networks (GANs) can synthesize images with smooth variations along semantically meaningful latent directions, such as pose, expression, layout, etc. While this indicates that GANs implicitly learn pixel-level correspondences across images, few studies have explored how to extract them explicitly. In this work, we introduce Coordinate GAN (CoordGAN), a structure-texture disentangled GAN that learns a dense correspondence map for each generated image. We represent the correspondence maps of different images as warped coordinate frames transformed from a canonical coordinate frame, i.e., the correspondence map, which describes the structure (e.g., the shape of a face), is controlled via a transformation. Hence, finding correspondences boils down to locating the same coordinate in different correspondence maps. In CoordGAN, we sample a transformation to represent the structure of a synthesized instance, while an independent texture branch is responsible for rendering appearance details orthogonal to the structure. Our approach can also extract dense correspondence maps for real images by adding an encoder on top of the generator. We quantitatively demonstrate the quality of the learned dense correspondences through segmentation mask transfer on multiple datasets. We also show that the proposed generator achieves better structure and texture disentanglement compared to existing approaches. Project page: https://jitengmu.github.io/CoordGAN/
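The abstract's central idea, that each image carries a map assigning every pixel a location in a shared canonical coordinate frame, so that matching pixels across images reduces to finding the same canonical coordinate, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, array shapes, and nearest-neighbor lookup are assumptions for illustration only.

```python
import numpy as np

def find_correspondence(corr_a, corr_b, y, x):
    """Match pixel (y, x) of image A to its correspondent in image B.

    corr_a, corr_b: (H, W, 2) correspondence maps that store, for every
    pixel, its location in the shared canonical coordinate frame
    (hypothetical layout; the paper's maps are learned by the GAN).
    Returns the (y, x) position in B whose canonical coordinate is
    nearest to that of the query pixel in A.
    """
    target = corr_a[y, x]                             # canonical coordinate of the query pixel
    dists = np.linalg.norm(corr_b - target, axis=-1)  # distance from every pixel of B to it
    return np.unravel_index(np.argmin(dists), dists.shape)

# Toy example: B's map is A's map shifted by (2, 3) in the canonical
# frame, so pixel (5, 5) in A corresponds to pixel (7, 8) in B.
H, W = 16, 16
yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
grid = np.stack([yy, xx], axis=-1).astype(float)      # identity map: corr[y, x] = (y, x)
corr_a = grid
corr_b = grid - np.array([2.0, 3.0])                  # warped coordinate frame
print(find_correspondence(corr_a, corr_b, 5, 5))
```

Because the maps are dense, this per-pixel lookup extends directly to warping entire label maps, which is how the abstract's segmentation mask transfer evaluation can be understood.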
Year
2022
DOI
10.1109/CVPR52688.2022.00977
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Self- & semi- & meta-learning; Image and video synthesis and generation
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
7
Name              Order  Citations  PageRank
Jiteng Mu         1      0          0.34
Shalini Gupta     2      299        20.42
Zhiding Yu        3      421        30.08
Nuno Vasconcelos  4      0          0.34
Xiaolong Wang     5      713        39.04
Jan Kautz         6      3615       198.77
Sifei Liu         7      227        17.54