Title
Attn-Eh ALN: Complex Text-To-Image Generation With Attention-Enhancing Adversarial Learning Networks
Abstract
Text-to-image generation can be widely applied in various fields, such as scene retrieval and computer-aided design. Existing approaches can generate realistic images from simple text descriptions, whereas rendering images from complex text descriptions is still not satisfactory for practical applications. To generate accurate high-resolution images from given complex texts, we propose an attention-enhancing adversarial learning network (Attn-Eh ALN) based on conditional generative adversarial networks and the attention mechanism. The model consists of an encoding module and a generative module. In the encoding module, we propose a local attention-driven encoding network that assigns different weights to the words in the text, enhancing the semantic representation of specific object features. The attention mechanism is employed to capture more details while preserving global information, which makes the details in the generated images more fine-grained. In the discriminating stage, we employ multiple discriminators to judge the realness of the generated images, avoiding the bias caused by a single discriminator. Moreover, a semantic similarity judgment module is introduced to improve the semantic consistency between the text description and the visual content. Experimental results on benchmark datasets indicate that Attn-Eh ALN produces favorable results compared with other state-of-the-art methods in both qualitative and quantitative assessments. (C) 2020 SPIE and IS&T
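The abstract does not give implementation details for the local attention-driven encoding network. The following minimal sketch (not the authors' code) only illustrates the general idea of re-weighting word embeddings with learned attention scores so that object-specific words dominate the sentence representation; the module name, dimensions, and choice of PyTorch are assumptions for illustration.

    import torch
    import torch.nn as nn

    class LocalAttentionEncoder(nn.Module):
        # Hypothetical sketch: score each word embedding, normalize the scores
        # with softmax, and use them to re-weight the words before pooling a
        # global sentence code that conditions the generator.
        def __init__(self, embed_dim=256):
            super().__init__()
            self.score = nn.Linear(embed_dim, 1)  # one attention score per word

        def forward(self, word_embs):
            # word_embs: (batch, num_words, embed_dim) precomputed word embeddings
            scores = self.score(word_embs).squeeze(-1)    # (batch, num_words)
            weights = torch.softmax(scores, dim=-1)       # attention weights over words
            weighted = word_embs * weights.unsqueeze(-1)  # emphasize important words
            sentence = weighted.sum(dim=1)                # (batch, embed_dim) global code
            return sentence, weights

    # Toy usage with random embeddings (2 captions, 12 words, 256 dimensions).
    encoder = LocalAttentionEncoder(embed_dim=256)
    dummy_words = torch.randn(2, 12, 256)
    sentence_code, word_weights = encoder(dummy_words)
    print(sentence_code.shape, word_weights.shape)  # torch.Size([2, 256]) torch.Size([2, 12])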
Year
2020
DOI
10.1117/1.JEI.29.6.063014
Venue
JOURNAL OF ELECTRONIC IMAGING
Keywords
image generation, conditional generative adversarial networks, semantic consistency, local attention mechanism, complex text description
DocType
Journal
Volume
29
Issue
6
ISSN
1017-9909
Citations
0
PageRank
0.34
References
0
Authors
4
Name            Order   Citations   PageRank
Cunyi Lin       1       0           0.34
Xianwei Rong    2       0           0.34
Ming Liu        3       377         44.40
Xiaoyan Yu      4       0           0.34