Title
MAFNet: Multi-style attention fusion network for salient object detection
Abstract
Salient object detection based on deep learning has become one of the research hotspots in computer vision. How to effectively extract useful information is a key issue for saliency detection. Most existing methods integrate features extracted from convolutional neural networks indiscriminately. However, the features of different layers have different characteristics; not all of them are useful for saliency detection, and some even introduce interference. To address this problem, we propose a Multi-style Attention Fusion Network (MAFNet). Specifically, MAFNet mainly consists of a dual-cues spatial attention module (DSA), a dual attention intermediate representation module (DAIR), a high-level channel attention module (HCA), and a multi-level feature fusion module (MLFF). Among them, DSA refines low-level features and filters background noise. DAIR uses two branches to adaptively integrate the spatial and semantic information of middle-level features. HCA obtains the semantic features of high-level blocks through two different channel-wise operations. In addition, MLFF effectively integrates the above multi-level features in a learnable manner. Finally, unlike the cross-entropy loss, a cross-IOU loss guides the network to pay more attention to local details. Experimental results on six public datasets demonstrate that MAFNet achieves competitive performance in saliency detection and performs well on small objects.
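Note: the cross-IOU loss is only named in the abstract, not defined in this record. For orientation, the sketch below shows a generic soft-IoU loss for saliency maps in PyTorch; the function name and exact formulation are illustrative assumptions, not the paper's actual implementation.

import torch

def iou_loss(pred, target, eps=1e-6):
    # pred: sigmoid saliency map, target: binary ground truth, both shaped (B, 1, H, W)
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = (pred + target - pred * target).sum(dim=(1, 2, 3))
    # 1 - soft IoU, averaged over the batch; region-overlap losses of this kind
    # penalize shape/boundary errors more directly than per-pixel cross-entropy
    return (1.0 - (inter + eps) / (union + eps)).mean()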
Year
2021
DOI
10.1016/j.neucom.2020.09.033
Venue
Neurocomputing
Keywords
Salient object detection, Multi-style attention, Learning-based feature fusion, Deep neural network
DocType
Journal
Volume
422
ISSN
0925-2312
Citations
3
PageRank
0.41
References
0
Authors
5
Name            Order  Citations  PageRank
Yanhua Liang    1      3          1.76
Guihe Qin       2      23         9.00
Ming-Hui Sun    3      8          3.83
Jie Yan         4      3          1.42
Huiming Jiang   5      3          0.41