Title
SPCA-Net: a spatial-position-relationship-based co-attention network for visual question answering
Abstract
Recent state-of-the-art methods for VQA (visual question answering) rely mainly on co-attention to link each visual object with the text objects, which achieves only a coarse interaction between the two modalities. Moreover, VQA models tend to focus on the association between visual and language features without considering the spatial relationships among the image region features extracted by Faster R-CNN. This paper proposes an effective deep co-attention network to address this problem. First, BERT is introduced to better capture the relationships between words and make the extracted text features more robust. Second, a multimodal co-attention mechanism based on spatial position relationships is proposed to realize fine-grained interactions between the question and the image. It consists of three basic components: a text self-attention unit, an image self-attention unit, and a question-guided attention unit. The self-attention mechanism over image visual features integrates the spatial position and width/height of each image region into the attention computation, so that every region is aware of the relative location and size of the other regions. Experimental results show that our model significantly outperforms existing models.
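The abstract states that the image self-attention unit injects each region's spatial position and width/height so that every region is aware of the relative location and size of the others. Below is a minimal PyTorch sketch of one way such a geometry-aware self-attention unit over Faster R-CNN region features could look; the class name, layer sizes, and the exact form of the pairwise geometric bias are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSelfAttention(nn.Module):
    """Single-head self-attention over image region features with an additive
    bias computed from box geometry (relative position and width/height), so
    each region attends with awareness of the location and size of the others.
    Dimensions and the bias MLP are illustrative assumptions."""

    def __init__(self, dim=512, geo_dim=64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Small MLP mapping pairwise geometric features to a scalar attention bias.
        self.geo_mlp = nn.Sequential(nn.Linear(4, geo_dim), nn.ReLU(), nn.Linear(geo_dim, 1))
        self.scale = dim ** -0.5

    def pairwise_geometry(self, boxes):
        # boxes: (B, N, 4) as (cx, cy, w, h), normalized to [0, 1].
        cx, cy, w, h = boxes.unbind(-1)
        eps = 1e-6
        # Relative offsets and log size ratios between every pair of regions.
        dx = (cx.unsqueeze(2) - cx.unsqueeze(1)) / (w.unsqueeze(2) + eps)
        dy = (cy.unsqueeze(2) - cy.unsqueeze(1)) / (h.unsqueeze(2) + eps)
        dw = torch.log((w.unsqueeze(2) + eps) / (w.unsqueeze(1) + eps))
        dh = torch.log((h.unsqueeze(2) + eps) / (h.unsqueeze(1) + eps))
        return torch.stack([dx, dy, dw, dh], dim=-1)  # (B, N, N, 4)

    def forward(self, feats, boxes):
        # feats: (B, N, dim) region features from Faster R-CNN; boxes: (B, N, 4).
        q, k, v = self.q(feats), self.k(feats), self.v(feats)
        logits = torch.matmul(q, k.transpose(-2, -1)) * self.scale           # (B, N, N)
        geo_bias = self.geo_mlp(self.pairwise_geometry(boxes)).squeeze(-1)   # (B, N, N)
        attn = F.softmax(logits + geo_bias, dim=-1)
        return torch.matmul(attn, v)                                         # (B, N, dim)

# Usage with hypothetical shapes: 36 Faster R-CNN regions of 512-d features per image.
feats = torch.randn(2, 36, 512)
boxes = torch.rand(2, 36, 4)
out = SpatialSelfAttention()(feats, boxes)   # (2, 36, 512)
```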
Year: 2022
DOI: 10.1007/s00371-022-02524-z
Venue: The Visual Computer
Keywords: BERT, Guided-attention, Self-attention, Faster R-CNN, Spatial position relationship
DocType: Journal
Volume: 38
Issue: 9
ISSN: 0178-2789
Citations: 0
PageRank: 0.34
References: 9
Authors: 4
Name             Order  Citations  PageRank
Yan Feng         1      0          0.34
Silamu Wushouer  2      0          0.34
Li Yanbin        3      0          0.34
Chai Yachuang    4      0          0.34