Title
Context Propagation Embedding Network For Weakly Supervised Semantic Segmentation
Abstract
Weakly supervised semantic segmentation with image-level labels is of great practical significance, as it enables related applications with only lightweight manual annotation. Recent approaches first infer visual cues, i.e., discriminative regions corresponding to each object category in an image, using deep convolutional classification networks, and then expand these cues to generate initial segmentation masks. Despite remarkable progress, segmentation performance remains unsatisfactory because the inferred visual cues are incomplete; using such low-quality cues as priors limits the achievable segmentation quality. To overcome this problem, we propose a novel context propagation embedding network (CPENet) that generates high-quality visual cues by learning the semantic relationships between each region and its surrounding neighbors and selectively propagating discriminative information to non-discriminative but object-related regions. Our method provides reliable initial segmentation masks for training a subsequent segmentation network that produces the final segmentation results. In addition, we refine the convolutional block attention module (CBAM) [30] to hierarchically extract more category-aware features by capturing global contextual information, which further promotes the propagation process. Experiments on a benchmark dataset demonstrate that the proposed method outperforms state-of-the-art approaches.
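The abstract states that CPENet refines CBAM to extract category-aware features, but does not describe the refinement itself. For orientation, the sketch below is a minimal, self-contained PyTorch implementation of the standard CBAM block (channel attention followed by spatial attention) that the authors build on; the class and parameter names (ChannelAttention, SpatialAttention, reduction, kernel_size) are illustrative choices, not the authors' code.

```python
# Minimal sketch of the baseline CBAM block (channel + spatial attention).
# This reproduces the standard module only; CPENet's refinement is not
# detailed in the abstract and is not shown here.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to average- and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)  # (B, C, 1, 1) channel weights


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)   # (B, 1, H, W)
        mx, _ = torch.max(x, dim=1, keepdim=True)  # (B, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel_att = ChannelAttention(channels, reduction)
        self.spatial_att = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.channel_att(x)   # reweight channels
        x = x * self.spatial_att(x)   # reweight spatial positions
        return x


if __name__ == "__main__":
    feat = torch.randn(2, 256, 32, 32)   # dummy feature map
    print(CBAM(256)(feat).shape)         # torch.Size([2, 256, 32, 32])
```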
Year
2020
DOI
10.1007/s11042-020-08787-9
Venue
MULTIMEDIA TOOLS AND APPLICATIONS
Keywords
Weakly supervised, Semantic segmentation, Context propagation, High-quality visual cues
DocType
Journal
Volume
79
Issue
45-46
ISSN
1380-7501
Citations
0
PageRank
0.34
References
0
Authors
5
Name            Order  Citations  PageRank
Yajun Xu        1      0          0.34
Zhendong Mao    2      307        25.18
Zhineng Chen    3      192        25.29
Xin Wen         4      0          0.34
Yangyang Li     5      0          0.68