Title: Dual guidance enhanced network for light field salient object detection
Abstract
Saliency detection models that take light field data as input remain underexplored. Existing deep saliency models usually treat multi-focus images as independent sources and extract their features separately, which is cumbersome and over-reliant on carefully designed network structures. Moreover, they do not fully exploit the cross-modal complementarity and cross-level continuity of information, and rarely consider edge cues. Motivated by these observations, this paper proposes a novel Dual Guidance Enhanced Network (DGENet) that considers both spatial content and explicit boundary cues. Specifically, DGENet contains two key modules: the recurrent global-guided focus module (RGFM) and the boundary-guided semantic accumulation module (BSAM). Each module is composed of multiple units, and the units within a module are not independent of one another. RGFM distills effective compressed information from focal slices and RGB images across different levels; the learned global context features then guide the network toward salient regions via a progressive reverse attention-driven strategy. BSAM introduces salient edge features to guide the accumulation of salient object features, producing saliency maps with sharp boundaries. Extensive experiments on three challenging light field datasets demonstrate that DGENet outperforms cutting-edge 2D, 3D, and 4D methods.
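The reverse attention-driven guidance mentioned in the abstract can be illustrated with a minimal sketch. This is a hedged simplification, not the paper's actual code: the function name `reverse_attention`, the array shapes, and the NumPy implementation are all assumptions for illustration.

```python
import numpy as np

def reverse_attention(features, coarse_pred):
    """Weight feature maps by (1 - sigmoid(coarse prediction)).

    Hypothetical simplification of a reverse-attention step: regions
    already predicted as salient by a deeper level are suppressed, so
    the next stage attends to the remaining, harder regions.

    features:    (C, H, W) feature maps
    coarse_pred: (H, W) coarse saliency logits from a deeper level
    """
    # 1 - sigmoid(logits): high weight where the coarse prediction is low
    attention = 1.0 - 1.0 / (1.0 + np.exp(-coarse_pred))
    # broadcast the single-channel attention map over all C channels
    return features * attention[None, :, :]
```

With zero logits the sigmoid is 0.5 everywhere, so every feature value is scaled by 0.5; as the coarse prediction grows confident in a region, that region's features are progressively suppressed.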
Year: 2022
DOI: 10.1016/j.imavis.2021.104352
Venue: Image and Vision Computing
Keywords: Light field, Salient object detection, Convolutional neural network, Hierarchical interaction, Boundary-guided semantic accumulation
DocType: Journal
Volume: 118
ISSN: 0262-8856
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name            Order  Citations  PageRank
Yanhua Liang    1      3          1.76
Guihe Qin       2      0          1.01
Minghui Sun     3      0          1.69
Jun Qin         4      0          1.35
Jie Yan         5      3          1.42
Zhonghan Zhang  6      0          1.35