Title: AMAM: An Attention-based Multimodal Alignment Model for Medical Visual Question Answering
Abstract: Medical Visual Question Answering (VQA) is a multimodal task that answers clinical questions about medical images. Existing methods have achieved good performance, but most medical VQA models focus on visual content while ignoring the influence of textual content. To address this issue, this paper proposes an Attention-based Multimodal Alignment Model (AMAM) for medical VQA, aiming to align text-based and image-based attention to enrich the textual features. First, we develop an Image-to-Question (I2Q) attention and a Word-to-Question (W2Q) attention to model the relations of both visual and textual content to the question. Second, we design a composite loss composed of a classification loss and an Image–Question Complementary (IQC) loss. The IQC loss aligns the importance of question words learned from visual and textual features, emphasizing meaningful words in the question and improving the quality of predicted answers. Benefiting from the attention mechanisms and the composite loss, AMAM obtains rich semantic textual information and accurate answers. Finally, because of data errors and missing labels in the VQA-RAD dataset, we further construct an enhanced dataset, VQA-RADPh, to raise data quality. Experimental results on public datasets show better performance of AMAM compared with advanced methods. Our source code is available at: https://github.com/shuning-ai/AMAM/tree/master.
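The abstract describes two attention branches over the question words, one guided by the image (I2Q) and one by the words themselves (W2Q), plus a composite objective that adds an Image–Question Complementary (IQC) alignment term to the usual answer-classification loss. The sketch below illustrates one way such a composite objective could be wired up. It is a minimal PyTorch-style assumption: the names (QuestionAttention, composite_loss), the additive attention form, and the choice of KL divergence for the IQC term are illustrative and are not taken from the authors' released code.

```python
# Hypothetical sketch of an AMAM-style composite objective (PyTorch).
# The module/function names and the KL-based IQC term are assumptions,
# not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuestionAttention(nn.Module):
    """Scores each question word against a guiding vector (global image or text feature)."""

    def __init__(self, q_dim: int, g_dim: int, hidden: int = 256):
        super().__init__()
        self.proj_q = nn.Linear(q_dim, hidden)
        self.proj_g = nn.Linear(g_dim, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, q_words: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        # q_words: (B, T, q_dim) word features; guide: (B, g_dim) guiding feature.
        joint = torch.tanh(self.proj_q(q_words) + self.proj_g(guide).unsqueeze(1))
        logits = self.score(joint).squeeze(-1)  # (B, T) one score per question word
        return F.softmax(logits, dim=-1)        # attention distribution over words


def composite_loss(answer_logits, labels, alpha_i2q, alpha_w2q, lam: float = 1.0):
    """Classification loss plus an IQC-style alignment of the two attention maps."""
    cls = F.cross_entropy(answer_logits, labels)
    # Push the text-driven (W2Q) attention toward the image-driven (I2Q) attention;
    # KL divergence is one possible choice for this alignment term.
    iqc = F.kl_div(alpha_w2q.clamp_min(1e-8).log(), alpha_i2q, reduction="batchmean")
    return cls + lam * iqc
```

In this reading, minimizing the alignment term makes the question-word importance inferred from the text agree with the importance inferred from the image, which is one concrete interpretation of "aligning the importance of the questions learned from visual and textual features".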
Year: 2022
DOI: 10.1016/j.knosys.2022.109763
Venue: Knowledge-Based Systems
Keywords: Attention mechanism, Deep learning, Medical Visual Question Answering, Multimodal fusion, Medical images
DocType: Journal
Volume: 255
ISSN: 0950-7051
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Author details (Name, Order, Citations, PageRank):
1. Haiwei Pan (Citations: 52, PageRank: 21.31)
2. Shuning He (Citations: 0, PageRank: 0.34)
3. Kejia Zhang (Citations: 0, PageRank: 0.34)
4. Bo Qu (Citations: 0, PageRank: 0.34)
5. Chunling Chen (Citations: 0, PageRank: 0.34)
6. Kun Shi (Citations: 76, PageRank: 11.50)