Title
Multi-Channel Attention Selection GAN With Cascaded Semantic Guidance for Cross-View Image Translation
Abstract
Cross-view image translation is challenging because it involves images with drastically different views and severe deformation. In this paper, we propose a novel approach named Multi-Channel Attention SelectionGAN (SelectionGAN) that makes it possible to generate images of natural scenes in arbitrary viewpoints, based on an image of the scene and a novel semantic map. The proposed SelectionGAN explicitly utilizes the semantic information and consists of two stages. In the first stage, the condition image and the target semantic map are fed into a cycled semantic-guided generation network to produce initial coarse results. In the second stage, we refine the initial results by using a multi-channel attention selection mechanism. Moreover, uncertainty maps automatically learned from attentions are used to guide the pixel loss for better network optimization. Extensive experiments on Dayton, CVUSA and Ego2Top datasets show that our model is able to generate significantly better results than the state-of-the-art methods.
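Note: the following is an illustrative sketch, not the authors' released code, of how a multi-channel attention selection block of the kind described in the abstract could look in PyTorch: the second stage produces several intermediate image candidates plus attention maps, fuses them by an attention-weighted sum, and predicts an uncertainty map that can re-weight the pixel loss. The channel sizes (in_channels = 64), the number of candidates (num_candidates = 10), and the uncertainty-weighted L1 formulation are assumptions made for this sketch only.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiChannelAttentionSelection(nn.Module):
        """Illustrative multi-channel attention selection block (assumed layout).

        From second-stage features it generates several RGB candidates and
        per-candidate attention maps, combines the candidates with a
        softmax-weighted sum, and predicts an uncertainty map.
        """

        def __init__(self, in_channels: int = 64, num_candidates: int = 10):
            super().__init__()
            self.num_candidates = num_candidates
            # N candidate images, 3 channels each.
            self.candidate_head = nn.Conv2d(in_channels, 3 * num_candidates, kernel_size=3, padding=1)
            # N single-channel attention maps, normalized across candidates.
            self.attention_head = nn.Conv2d(in_channels, num_candidates, kernel_size=1)
            # One uncertainty map used to re-weight the pixel loss.
            self.uncertainty_head = nn.Conv2d(in_channels, 1, kernel_size=1)

        def forward(self, feats: torch.Tensor):
            b, _, h, w = feats.shape
            candidates = torch.tanh(self.candidate_head(feats)).view(b, self.num_candidates, 3, h, w)
            attention = F.softmax(self.attention_head(feats), dim=1).view(b, self.num_candidates, 1, h, w)
            refined = (attention * candidates).sum(dim=1)               # attention-selected output image
            uncertainty = torch.sigmoid(self.uncertainty_head(feats))   # per-pixel uncertainty in (0, 1)
            return refined, uncertainty

    def uncertainty_weighted_l1(pred, target, uncertainty, eps: float = 1e-6):
        """One common way to let a learned uncertainty map guide a pixel loss
        (assumed formulation): down-weight pixels with high uncertainty while
        penalizing overly large uncertainty values."""
        return (torch.abs(pred - target) / (uncertainty + eps) + torch.log(uncertainty + eps)).mean()

For example, refined, uncertainty = block(features) followed by uncertainty_weighted_l1(refined, ground_truth, uncertainty) gives a coarse-to-fine training signal in the spirit of the two-stage design described above.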
Year
2019
DOI
10.1109/CVPR.2019.00252
Venue
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Field
Image translation, Pattern recognition, Computer science, Source code, Viewpoints, Semantic information, Multi channel, Artificial intelligence, Pixel, Semantic map
DocType
Journal
Volume
abs/1904.06807
ISSN
1063-6919
Citations
4
PageRank
0.38
References
0
Authors
6
Name            Order   Citations   PageRank
Hao Tang        1       338         34.83
Dan Xu          2       342         16.39
Nicu Sebe       3       7013        403.03
Yanzhi Wang     4       1082        136.11
Corso Jason J.  5       1442        92.44
Yan Yan         6       691         31.13