Title
Text2Scene: Generating Compositional Scenes from Textual Descriptions
Abstract
In this paper we propose Text2Scene, a model that generates various forms of compositional scene representations from natural language descriptions. Unlike recent works, our method does not use Generative Adversarial Networks (GANs). Instead, Text2Scene learns to sequentially generate objects and their attributes (location, size, appearance, etc.) at every time step by attending to different parts of the input text and to the current state of the generated scene. We show that, with minor modifications, the proposed framework can generate different forms of scene representations, including cartoon-like scenes, object layouts corresponding to real images, and synthetic images. Our method is not only competitive with state-of-the-art GAN-based methods under automatic metrics and superior under human judgments, but also has the advantage of producing interpretable results.
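The abstract describes a step-by-step decoder that, at each time step, attends over the encoded input text and the current scene state to predict the next object and its attributes. The PyTorch sketch below illustrates one such decoding step; the module names, vocabulary sizes, attention form, and attribute heads are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code) of one sequential generation step:
# attend over the encoded text, combine with a summary of the scene so far,
# then predict the next object and its attributes. All dimensions and
# vocabularies here are assumed for illustration.
import torch
import torch.nn as nn

class SceneDecoderStep(nn.Module):
    def __init__(self, text_dim=256, scene_dim=256, hidden_dim=512,
                 num_objects=100, num_locations=64, num_sizes=3):
        super().__init__()
        self.rnn = nn.GRUCell(text_dim + scene_dim, hidden_dim)
        self.attn = nn.Linear(hidden_dim, text_dim)           # attention query
        self.obj_head = nn.Linear(hidden_dim, num_objects)    # which object to add
        self.loc_head = nn.Linear(hidden_dim, num_locations)  # where to place it
        self.size_head = nn.Linear(hidden_dim, num_sizes)     # how large to draw it

    def forward(self, text_feats, scene_feat, hidden):
        # text_feats: (batch, seq_len, text_dim) encoded description tokens
        # scene_feat: (batch, scene_dim) summary of the scene generated so far
        # hidden:     (batch, hidden_dim) decoder state
        query = self.attn(hidden)                                      # (batch, text_dim)
        scores = torch.bmm(text_feats, query.unsqueeze(2)).squeeze(2)  # (batch, seq_len)
        weights = torch.softmax(scores, dim=1)                         # attend to the text
        context = torch.bmm(weights.unsqueeze(1), text_feats).squeeze(1)
        hidden = self.rnn(torch.cat([context, scene_feat], dim=1), hidden)
        return (self.obj_head(hidden), self.loc_head(hidden),
                self.size_head(hidden), hidden)

# Example: one decoding step on random inputs.
enc = torch.randn(2, 12, 256)    # 12 encoded text tokens per description
scene = torch.zeros(2, 256)      # empty initial scene summary
h = torch.zeros(2, 512)
step = SceneDecoderStep()
obj_logits, loc_logits, size_logits, h = step(enc, scene, h)
```

A full generator would run this step in a loop, rendering each predicted object into the scene and re-encoding the result as the next scene summary, which is how the different output forms (cartoon scenes, layouts, synthetic images) could share one framework.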
Year
2019
DOI
10.1109/CVPR.2019.00687
Venue
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Field
Pattern recognition, Computer science, Natural language, Artificial intelligence, Generative grammar, Real image
DocType
Conference
ISSN
1063-6919
Citations
3
PageRank
0.37
References
26
Authors
3
Name               Order   Citations   PageRank
Fuwen Tan          1       8           2.47
Song Feng          2       280         19.55
Vicente Ordonez    3       1418        69.65