Title
Cross-Domain Visual Attention Model Adaption with One-Shot GAN
Abstract
State-of-the-art models for visual attention prediction perform well on common images, but they generally degrade when applied to another domain with a conspicuously different data distribution, such as the solar images studied in this work. To address this issue and adapt these models from common images to the sun, this paper proposes a new dataset, named VASUN, that records free-viewing human attention on solar images. Based on this dataset, we propose a new cross-domain model adaptation approach: a siamese feature extraction network with two discriminators, trained in a one-shot learning manner, which bridges the gap between the source domain and the target domain through a joint distribution space. Finally, we benchmark existing models as well as our approach on VASUN and analyze the prediction of visual attention on the sun. The results show that our method achieves state-of-the-art performance with only one labeled image in the target domain and contributes to the domain adaptation task.
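The abstract's architecture can be illustrated with a minimal NumPy sketch: a weight-shared ("siamese") encoder applied to both domains, plus two discriminators, one on the shared feature space and one on the predicted attention maps. This is not the authors' implementation; all layer sizes, class names, and the choice of a standard GAN discriminator loss are illustrative assumptions.

```python
# Illustrative sketch only -- sizes, names, and losses are assumptions,
# not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SharedEncoder:
    """Weight-shared (siamese) feature extractor applied to both domains."""
    def __init__(self, d_in, d_out):
        self.W = rng.standard_normal((d_in, d_out)) * 0.1
    def __call__(self, x):
        return np.tanh(x @ self.W)

class Discriminator:
    """Logistic head estimating P(input came from the source domain)."""
    def __init__(self, d_in):
        self.w = rng.standard_normal(d_in) * 0.1
    def __call__(self, f):
        return sigmoid(f @ self.w)

def adversarial_loss(p_src, p_tgt):
    # Standard GAN discriminator loss: source labeled 1, target labeled 0.
    eps = 1e-8
    return -np.mean(np.log(p_src + eps)) - np.mean(np.log(1.0 - p_tgt + eps))

# Toy batches of flattened "images" from each domain.
x_src = rng.standard_normal((4, 32))   # stand-in for common images
x_tgt = rng.standard_normal((4, 32))   # stand-in for solar images

enc    = SharedEncoder(32, 16)   # shared between the two branches
head   = SharedEncoder(16, 8)    # attention-prediction head (illustrative)
d_feat = Discriminator(16)       # discriminator on the feature space
d_map  = Discriminator(8)        # discriminator on predicted attention maps

f_src, f_tgt = enc(x_src), enc(x_tgt)
loss_feat = adversarial_loss(d_feat(f_src), d_feat(f_tgt))
loss_map  = adversarial_loss(d_map(head(f_src)), d_map(head(f_tgt)))
total = loss_feat + loss_map
```

Minimizing both discriminator objectives adversarially would push the two domains toward the shared ("joint") distribution space the abstract describes; the one-shot aspect would correspond to `x_tgt` containing the single labeled target-domain image.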
Year
2020
DOI
10.1109/MIPR49039.2020.00011
Venue
2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)
Keywords
siamese feature extraction network, one-shot learning, source domain, target domain, joint distribution space, Sun, cross-domain visual attention model adaption, one-shot GAN, visual attention prediction, free-viewing human attention, VASUN
DocType
Conference
ISBN
978-1-7281-4273-9
Citations
0
PageRank
0.34
References
16
Authors
5
Name       Order  Citations  PageRank
Daowei Li  1      0          0.34
Kui Fu     2      0          0.34
Y. Zhao    3      277        33.44
Long Xu    4      44         14.26
Jia Li     5      524        42.09