Title
Visual Question Answering In The Medical Domain Based On Deep Learning Approaches: A Comprehensive Study
Abstract
Visual Question Answering (VQA) in the medical domain has attracted increasing attention from research communities in recent years due to its various applications. This paper investigates several deep learning approaches to building a medical VQA system based on ImageCLEF's VQA-Med dataset, which consists of about 4K images and about 15K question-answer pairs. Due to the wide variety of images and questions in this dataset, the proposed model is a hierarchical one consisting of several sub-models, each tailored to handle a certain type of question. Specifically, a dedicated model classifies the questions into four categories, and each category is handled by a separate sub-model. At their core, all of these models are built on pre-trained Convolutional Neural Networks (CNNs). To obtain the best results, extensive experiments are performed and various techniques are employed, including Data Augmentation (DA), Multi-Task Learning (MTL), Global Average Pooling (GAP), ensembling, and Sequence-to-Sequence (Seq2Seq) models. Overall, the final model achieves an accuracy of 60.8 and a BLEU score of 63.4, which are competitive with state-of-the-art results despite using simpler and less demanding sub-models. (c) 2021 Elsevier B.V. All rights reserved.
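The hierarchical design described above (a question classifier routing each question to one of four category-specific sub-models) can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: the keyword-based classifier and canned answers below replace the trained CNN-based models the authors actually use, and the category names are assumed from the paper's keywords (modality, plane, organ system, abnormality).

```python
# Hedged sketch of the hierarchical routing from the abstract: classify the
# question into one of four categories, then dispatch it to that category's
# answering sub-model. All components here are toy placeholders.

CATEGORIES = ("modality", "plane", "organ", "abnormality")

def classify_question(question: str) -> str:
    """Toy keyword stand-in for the paper's question-classification model."""
    q = question.lower()
    if "plane" in q:
        return "plane"
    if "organ" in q or "part of the body" in q:
        return "organ"
    if "abnormal" in q or "wrong" in q:
        return "abnormality"
    return "modality"  # default bucket, e.g. "what imaging modality is this?"

# One answering sub-model per category. In the paper these are separate
# pre-trained CNN-based models; here each is a placeholder function.
SUB_MODELS = {
    "modality": lambda img, q: "xr - plain film",
    "plane": lambda img, q: "axial",
    "organ": lambda img, q: "skull and contents",
    "abnormality": lambda img, q: "no abnormality detected",
}

def answer(image, question: str) -> str:
    """Route the question to the sub-model for its predicted category."""
    category = classify_question(question)
    return SUB_MODELS[category](image, question)
```

The point of the routing step is that each sub-model only ever sees questions of one type, so it can be kept small and specialized rather than handling the dataset's full question variety.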
Year
2021
DOI
10.1016/j.patrec.2021.07.002
Venue
PATTERN RECOGNITION LETTERS
Keywords
Medical visual question answering, Planes questions, Organ systems questions, Modality questions, Abnormality questions, Transfer learning, Data augmentation, Multi-Task learning, Global average pooling, Ensemble
DocType
Journal
Volume
150
ISSN
0167-8655
Citations
1
PageRank
0.36
References
0
Authors
4
Name              | Order | Citations | PageRank
Aisha Al-Sadi     | 1     | 1         | 0.36
Mahmoud Al-Ayyoub | 2     | 730       | 63.41
Yaser Jararweh    | 3     | 968       | 88.95
Fumie Costen      | 4     | 1         | 0.36