Title
Multi-Type Self-Attention Guided Degraded Saliency Detection
Abstract
Existing saliency detection techniques are sensitive to image quality and perform poorly on degraded images. In this paper, we systematically analyze the state of research on detecting salient objects in degraded images and then propose a new multi-type self-attention network, namely MSANet, for degraded saliency detection. The main contributions are: 1) applying attention transfer learning to promote semantic detail perception and internal feature mining by the target network on degraded images; 2) developing a multi-type self-attention mechanism that recalculates the weights of multi-scale features. By computing global and local attention scores, we obtain weighted features at different scales, effectively suppress interference from noise and redundant information, and extract more complete boundaries. The proposed MSANet converts low-quality inputs into high-quality saliency maps directly, in an end-to-end fashion. Experiments on seven widely used datasets show that our approach performs well on both clear and degraded images.
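The abstract's core mechanism, reweighting multi-scale features via global and local self-attention scores, can be sketched minimally. The following NumPy sketch is illustrative only and is not the authors' implementation: the function names, the window size, and the simple averaging fusion of the global and local branches are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat):
    """Scaled dot-product self-attention over spatial positions.
    feat: (N, C) array of N spatial positions with C channels."""
    n, c = feat.shape
    scores = softmax(feat @ feat.T / np.sqrt(c))  # (N, N) attention weights
    return scores @ feat                          # attention-weighted features

def multi_scale_attention(feat_map, window=2):
    """Hypothetical global + local attention reweighting of one feature map.
    feat_map: (H, W, C). 'window' controls the local attention patch size."""
    h, w, c = feat_map.shape
    # Global branch: attend over every spatial position at once.
    global_out = self_attention(feat_map.reshape(h * w, c)).reshape(h, w, c)
    # Local branch: attend within non-overlapping windows only.
    local_out = np.empty_like(feat_map)
    for i in range(0, h, window):
        for j in range(0, w, window):
            patch = feat_map[i:i + window, j:j + window]
            ph, pw, _ = patch.shape
            local_out[i:i + ph, j:j + pw] = self_attention(
                patch.reshape(ph * pw, c)).reshape(ph, pw, c)
    # Assumed fusion: plain average of the two reweighted maps.
    return 0.5 * (global_out + local_out)
```

In a real network the two branches would use learned query/key/value projections and a learned fusion; the averaging here only illustrates how global and local attention scores can jointly reweight the same multi-scale features.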
Year
2020
Venue
AAAI
DocType
Conference
Volume
34
ISSN
2159-5399
Citations
0
PageRank
0.34
References
0
Authors
5
Name | Order | Citations | PageRank
Ziqi Zhou | 1 | 5 | 1.76
Zheng Wang | 2 | 43 | 4.79
Huchuan Lu | 3 | 4827 | 186.26
Song Wang | 4 | 954 | 79.55
Meijun Sun | 5 | 74 | 11.77