Title
Text Guided Person Image Synthesis
Abstract
This paper presents a novel method to manipulate the visual appearance (pose and attribute) of a person image according to natural language descriptions. Our method consists of two stages: 1) text-guided pose generation and 2) visual appearance transferred image synthesis. In the first stage, the method infers a reasonable target human pose from the text. In the second stage, it synthesizes a realistic, appearance-transferred person image conditioned on the text together with the target pose. The method extracts sufficient information from the text and establishes a mapping between the image space and the language space, making it possible to generate and edit images corresponding to the description. We conduct extensive experiments to demonstrate the effectiveness of our method and use the VQA Perceptual Score as a metric for evaluating it. This work shows for the first time that a person image can be edited automatically from natural language descriptions.
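The two-stage design described in the abstract can be illustrated with a minimal sketch. The PyTorch module below is an assumption-laden illustration only: the class name, layer sizes, text encoder, and conditioning scheme are placeholders chosen for exposition and are not the authors' implementation; it only mirrors the stated factorization into text-guided pose generation followed by pose- and text-conditioned image synthesis.

```python
import torch
import torch.nn as nn

class TextGuidedPersonSynthesis(nn.Module):
    """Hypothetical two-stage sketch: (1) infer a target pose from text,
    (2) synthesize an appearance-transferred image from image + pose + text."""

    def __init__(self, vocab_size=5000, text_dim=256, num_joints=18):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 300)
        self.text_encoder = nn.GRU(300, text_dim, batch_first=True)
        # Stage 1: text-guided pose generation (predict normalized 2D keypoints).
        self.pose_generator = nn.Sequential(
            nn.Linear(text_dim, 512), nn.ReLU(),
            nn.Linear(512, num_joints * 2), nn.Sigmoid(),
        )
        # Stage 2: image synthesis conditioned on the source image,
        # the generated pose, and the text embedding.
        cond_channels = 3 + num_joints * 2 + text_dim
        self.image_generator = nn.Sequential(
            nn.Conv2d(cond_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, src_image, text_tokens):
        # Encode the description into a single vector.
        _, h = self.text_encoder(self.embed(text_tokens))  # h: (1, B, text_dim)
        text_emb = h[-1]                                    # (B, text_dim)

        # Stage 1: text-guided pose generation.
        pose = self.pose_generator(text_emb)                # (B, num_joints * 2)

        # Stage 2: tile pose and text over the image grid and synthesize.
        B, _, H, W = src_image.shape
        cond = torch.cat([pose, text_emb], dim=1)[:, :, None, None].expand(-1, -1, H, W)
        out = self.image_generator(torch.cat([src_image, cond], dim=1))
        return out, pose


# Usage sketch with random inputs.
model = TextGuidedPersonSynthesis()
img = torch.randn(2, 3, 128, 64)        # source person images
txt = torch.randint(0, 5000, (2, 16))   # tokenized descriptions
gen_img, gen_pose = model(img, txt)
print(gen_img.shape, gen_pose.shape)    # (2, 3, 128, 64) and (2, 36)
```

A full system in this spirit would also include adversarial and reconstruction losses and a learned pose-rendering step; those are omitted here to keep the sketch focused on the two-stage factorization.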
Year
2019
DOI
10.1109/CVPR.2019.00378
Venue
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Field
Pattern recognition, Computer science, Image synthesis, Natural language, Artificial intelligence, Perception, Visual appearance
DocType
Journal
Volume
abs/1904.05118
ISSN
1063-6919
Citations
6
PageRank
0.40
References
0
Authors
6
Name                    Order  Citations  PageRank
Xingran Zhou            1      7          1.08
Siyu Huang              2      41         7.23
Li B                    3      14         4.94
Yingming Li             4      57         14.82
Jiachen Li              5      6          0.40
Zhongfei (Mark) Zhang   6      2451       164.30