Title
Distributed Attention for Grounded Image Captioning
Abstract
We study the problem of weakly supervised grounded image captioning: given an image, the goal is to automatically generate a sentence describing the content of the image, with each noun word grounded to the corresponding region in the image. This task is challenging due to the lack of explicit fine-grained region-word alignments as supervision. Previous weakly supervised methods mainly explore various regularization schemes to improve attention accuracy, but their performance still falls far short of fully supervised methods. One main issue that has been overlooked is that the attention for generating visually groundable words may focus only on the most discriminative parts of an object and fail to cover the whole object. To this end, we propose a simple yet effective method to alleviate this issue, termed the partial grounding problem in our paper. Specifically, we design a distributed attention mechanism that enforces the network to aggregate information from multiple spatially distinct regions with consistent semantics while generating each word. The union of the attended region proposals should therefore form a visual region that completely encloses the object of interest. Extensive experiments demonstrate the superiority of our proposed method over the state of the art.
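The core idea of the abstract (multiple attention branches over region proposals, whose attended regions are pooled so their union covers the full object) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the branch count, the random projection weights, and the function name `distributed_attention` are all assumptions for illustration.

```python
import numpy as np

def distributed_attention(regions, query, num_branches=3, seed=0):
    """Toy sketch of one distributed-attention step (hypothetical shapes).

    regions: (N, D) array of region-proposal features.
    query:   (D,) decoder state for the word being generated.
    Each branch attends over all regions; the branch outputs are
    averaged, and the union of each branch's top-scoring region
    approximates a grounding mask covering the whole object.
    """
    rng = np.random.default_rng(seed)
    N, D = regions.shape
    attended, top_regions = [], set()
    for _ in range(num_branches):
        # Hypothetical per-branch projection of the query (random here;
        # in a real model these would be learned parameters).
        W = rng.standard_normal((D, D)) / np.sqrt(D)
        scores = regions @ (W @ query)          # (N,) attention logits
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                    # softmax over regions
        attended.append(alpha @ regions)        # (D,) attended feature
        top_regions.add(int(alpha.argmax()))    # branch's focused region
    # Average branch outputs; the union of focused regions is the grounding.
    return np.mean(attended, axis=0), sorted(top_regions)

rng = np.random.default_rng(1)
context, grounded = distributed_attention(
    rng.standard_normal((5, 8)),  # 5 region proposals, 8-dim features
    rng.standard_normal(8),       # decoder query
)
```

Because the branches use different projections, their attention peaks tend to land on different proposals, which is what lets the union of focused regions enclose more of the object than a single attention map would.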
Year
2021
DOI
10.1145/3474085.3475354
Venue
International Multimedia Conference
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
10
Name            Order   Citations   PageRank
Nenglun Chen    1       4           3.79
Xingjia Pan     2       0           1.01
Runnan Chen     3       3           1.75
Lei Yang        4       12          2.87
Zhiwen Lin      5       0           0.68
Yuqiang Ren     6       0           0.68
Haolei Yuan     7       0           0.34
Xiaowei Guo     8       0           0.34
Feiyue Huang    9       2264        1.86
Wenping Wang    10      15          1.90