Title
Visual Grounding Via Accumulated Attention
Abstract
Visual grounding (VG) aims to locate the most relevant object or region in an image based on a natural language query. Generally, it requires the machine to first understand the query, identify the key concepts in the image, and then locate the target object by specifying its bounding box. However, many real-world visual grounding applications involve ambiguous queries and images with complicated scene structures. Identifying the target from such highly redundant and correlated information can be very challenging and often leads to unsatisfactory performance. To tackle this, in this paper, we exploit an attention module for each kind of information to reduce its internal redundancy. We then propose an accumulated attention (A-ATT) mechanism to reason over all the attention modules jointly, so that the relations among the different kinds of information are captured explicitly. Moreover, to improve the performance and robustness of our VG models, we additionally introduce noise into the training procedure to bridge the distribution gap between human-labeled training data and lower-quality real-world data. With this “noised” training strategy, we can further learn a bounding box regressor, which refines the bounding box of the target object. We evaluate the proposed methods on four popular datasets (namely ReferCOCO, ReferCOCO+, ReferCOCOg, and GuessWhat?!). The experimental results show that our methods significantly outperform previous work on every dataset in terms of accuracy.
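To make the accumulated-attention idea concrete, the sketch below shows one way it could be realized: one attention module per information source (query, image, object candidates), refined jointly over several rounds, with each round's attention logits added onto the previous ones. This is a minimal illustration under assumptions; the three-way modality split follows the abstract, but the scoring MLPs, feature dimensions, and round count are hypothetical choices, not the authors' exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AccumulatedAttention(nn.Module):
    # Sketch of accumulated attention: each modality's attention is scored
    # against the attended summaries of the other two modalities, and the
    # attention logits are accumulated across rounds.
    def __init__(self, dim=256, rounds=3):
        super().__init__()
        self.rounds = rounds
        self.score = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(3 * dim, dim), nn.Tanh(), nn.Linear(dim, 1))
            for m in ("query", "image", "object")
        })

    def forward(self, feats):
        # feats: dict mapping each modality name to a (batch, n_items, dim) tensor.
        names = ("query", "image", "object")
        summary = {m: feats[m].mean(dim=1) for m in names}   # initial uniform summaries
        acc = {m: feats[m].new_zeros(feats[m].shape[:2]) for m in names}
        attn = {}
        for _ in range(self.rounds):
            new_summary = {}
            for m in names:
                # score each item of modality m against the other two summaries
                B, N, _ = feats[m].shape
                ctx = torch.cat([summary[o] for o in names if o != m], dim=-1)
                ctx = ctx.unsqueeze(1).expand(B, N, -1)
                logits = self.score[m](torch.cat([feats[m], ctx], dim=-1)).squeeze(-1)
                acc[m] = acc[m] + logits                     # accumulate across rounds
                attn[m] = F.softmax(acc[m], dim=-1)
                new_summary[m] = (attn[m].unsqueeze(-1) * feats[m]).sum(dim=1)
            summary = new_summary
        return attn  # final accumulated attention over each modality's items

In this reading, the final attention over the object features would select the target candidate; the noised training strategy and the bounding box regressor that refines the selected box are separate components and are omitted from this sketch.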
Year
2022
DOI
10.1109/TPAMI.2020.3023438
Venue
IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords
Visual grounding, accumulated attention, noised training strategy, bounding box regression
DocType
Journal
Volume
44
Issue
3
ISSN
0162-8828
Citations
0
PageRank
0.34
References
9
Authors
6
Name          Order  Citations  PageRank
Chaorui Deng  1      6          13.39
Qi Wu         2      396        41.54
Wu Qingyao    3      23         1.65
Fan Lyu       4      6          3.22
Fuyuan Hu     5      0          1.01
Mingkui Tan   6      501        38.31