Title
Learning semantic priors for texture-realistic sketch-to-image synthesis
Abstract
Sketch-to-image synthesis is a challenging computer vision task that generates photo-realistic images from given sketches. Existing methods of this kind are unable to discover the inherent semantic information contained in an image and use it to guide the synthesis process, which substantially reduces their capacity to generate photo-realistic images. Accordingly, in this paper, we propose a novel framework that explores and leverages semantic information to generate realistic textures in the synthesized images. More specifically, a segmentation map generation network is designed to learn the relationship between sketches and segmentation maps, so that semantic segmentation maps can be obtained from input sketches. Conditioned on these semantic segmentation maps, a feature-wise affine transformation then modulates the feature maps of the network's intermediate layers, which efficiently generates the textures needed to synthesize more photo-realistic images. Extensive experiments demonstrate that, compared with other state-of-the-art sketch-to-image synthesis methods, our approach not only synthesizes images with significantly superior visual quality but also achieves better results on quantitative metrics.
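As a rough illustration of the segmentation-conditioned feature-wise affine transformation the abstract describes, the following PyTorch sketch modulates normalized intermediate feature maps with per-pixel scale and shift parameters predicted from a semantic segmentation map, in the spirit of SPADE-style conditional normalization. This is a minimal sketch under stated assumptions, not the authors' released code; all module names, layer choices, and shapes here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentationConditionedAffine(nn.Module):
    """Feature-wise affine transform conditioned on a segmentation map.

    Hypothetical sketch (not the paper's implementation): the segmentation
    map predicts per-pixel scale (gamma) and shift (beta) parameters that
    modulate normalized intermediate feature maps of the generator.
    """

    def __init__(self, feat_channels: int, seg_channels: int, hidden: int = 128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feat_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(seg_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, seg: torch.Tensor) -> torch.Tensor:
        # Resize the segmentation map to the spatial size of the feature maps.
        seg = F.interpolate(seg, size=feat.shape[2:], mode="nearest")
        h = self.shared(seg)
        gamma, beta = self.to_gamma(h), self.to_beta(h)
        # Feature-wise affine transformation: scale and shift the normalized
        # features with segmentation-derived parameters.
        return self.norm(feat) * (1.0 + gamma) + beta

if __name__ == "__main__":
    # Toy usage: 8 semantic classes (one-hot channels), 64-channel features.
    feats = torch.randn(1, 64, 32, 32)
    seg = torch.randn(1, 8, 128, 128)
    layer = SegmentationConditionedAffine(feat_channels=64, seg_channels=8)
    print(layer(feats, seg).shape)  # torch.Size([1, 64, 32, 32])
```

Predicting gamma and beta spatially (rather than as per-channel scalars) lets the modulation vary across semantic regions, which is one plausible way to realize the texture-generating conditioning the abstract describes.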
Year
2021
DOI
10.1016/j.neucom.2021.08.085
Venue
Neurocomputing
Keywords
Image-to-image translation, Deep learning, GANs, Sketch-based image synthesis
DocType
Journal
Volume
464
ISSN
0925-2312
Citations
1
PageRank
0.34
References
0
Authors
5
Name         Order  Citations  PageRank
Zeyu Li      1      15         12.07
Cheng Deng   2      1283       85.48
Kun Wei      3      12         4.55
Wei Liu      4      4041       204.19
Dacheng Tao  5      19032      747.78