Title
Bridge-GAN: Interpretable Representation Learning for Text-to-Image Synthesis
Abstract
Text-to-image synthesis aims to generate images whose content is consistent with a given text description, a highly challenging task with two main issues: visual reality and content consistency. Thanks to recent progress in generative adversarial networks, generating images with high visual reality has become feasible. However, translating a text description into an image with high content consistency remains difficult. To address these issues, it is reasonable to establish a transitional space with an interpretable representation as a bridge between text and image. We therefore propose a text-to-image synthesis approach named Bridge-like Generative Adversarial Networks (Bridge-GAN). Its main contributions are: (1) A transitional space is established as a bridge for improving content consistency, where an interpretable representation can be learned by preserving the key visual information from the given text description. (2) A ternary mutual information objective is designed to optimize the transitional space and to enhance both visual reality and content consistency; it is proposed with the goal of disentangling the latent factors conditioned on the text description for further interpretable representation learning. Comprehensive experiments on two widely used datasets verify the effectiveness of our Bridge-GAN, which achieves the best performance.
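This record does not spell out the paper's ternary mutual information objective, but the quantity such objectives build on, I(X;Y), can be computed exactly for discrete variables. The sketch below is illustrative background only (the `mutual_information` helper is a hypothetical name, not from the paper): when latent factors are well disentangled and informative about the conditioning text, the mutual information between them is high; for independent variables it is zero.

```python
import math

def mutual_information(p_xy):
    """I(X;Y) in nats, from a joint probability table given as a list of rows.

    I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) * p(y)) )
    """
    p_x = [sum(row) for row in p_xy]          # marginal over columns
    p_y = [sum(col) for col in zip(*p_xy)]    # marginal over rows
    mi = 0.0
    for i, row in enumerate(p_xy):
        for j, p in enumerate(row):
            if p > 0:  # 0 * log 0 is taken as 0
                mi += p * math.log(p / (p_x[i] * p_y[j]))
    return mi

# Perfectly correlated binary variables: I(X;Y) = log 2 ≈ 0.693 nats
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))
# Independent binary variables: I(X;Y) = 0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))
```

In practice, objectives of this kind are optimized with variational lower bounds (as in InfoGAN-style training) rather than exact tables, since the generator's joint distribution is not available in closed form.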
Year
2020
DOI
10.1109/TCSVT.2019.2953753
Venue
IEEE Transactions on Circuits and Systems for Video Technology
Keywords
Text-to-image synthesis, interpretable representation learning, Bridge-GAN
DocType
Journal
Volume
30
Issue
4
ISSN
1051-8215
Citations
11
PageRank
0.40
References
10
Authors
2
Name | Order | Citations | PageRank
Mingkuan Yuan | 1 | 713.75
Yuxin Peng | 2 | 112274.90