Title
Transform-Retrieve-Generate: Natural Language-Centric Outside-Knowledge Visual Question Answering
Abstract
Outside-knowledge visual question answering (OK-VQA) requires the agent to comprehend the image, make use of relevant knowledge from the entire web, and digest all the information to answer the question. Most previous works address the problem by first fusing the image and question in the multi-modal space, which is inflexible for further fusion with a vast amount of external knowledge. In this paper, we call for an alternative paradigm for the OK-VQA task, which transforms the image into plain text so that knowledge passage retrieval and generative question answering can both be performed in the natural language space. This paradigm takes advantage of the sheer volume of gigantic knowledge bases and the richness of pretrained language models. We propose a Transform-Retrieve-Generate (TRiG) framework (the code of this work will be made public), which can be plugged and played with alternative image-to-text models and textual knowledge bases. Experimental results show that our TRiG framework outperforms all state-of-the-art supervised methods by at least an 11.1% absolute margin.
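The abstract outlines a three-stage pipeline: transform the image into plain text, retrieve knowledge passages from a textual knowledge base, and generate a free-form answer with a pretrained language model. Below is a minimal sketch of that flow; every name and interface here (trig_answer, image_to_text, retrieve, generate) is a hypothetical placeholder for illustration, not the authors' released code.

```python
from typing import Callable, List

def trig_answer(
    image: object,
    question: str,
    image_to_text: Callable[[object], str],      # Transform step
    retrieve: Callable[[str, int], List[str]],   # Retrieve step
    generate: Callable[[str], str],              # Generate step
    top_k: int = 5,
) -> str:
    """Answer an OK-VQA question entirely in the natural language space."""
    # Transform: verbalize the image (captions, object labels, OCR, etc.).
    image_text = image_to_text(image)
    # Retrieve: query a textual knowledge base with the question plus the
    # verbalized image, keeping the top-k passages.
    passages = retrieve(f"{question} {image_text}", top_k)
    # Generate: condition a pretrained text-to-text language model on the
    # question, image text, and retrieved passages to decode an answer.
    prompt = "\n".join(
        [f"question: {question}", f"image: {image_text}"]
        + [f"passage: {p}" for p in passages]
    )
    return generate(prompt)
```

Any image captioner, passage retriever, and generative QA model can be slotted into the three callables, which is one way to read the plug-and-play claim in the abstract.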
Year
2022
DOI
10.1109/CVPR52688.2022.00501
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Vision + language, Visual reasoning
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
6
Name                 Order  Citations  PageRank
Gao, Feng            1      0          0.68
Qing Ping            2      0          0.68
Govind Thattai       3      0          0.68
Aishwarya Reganti    4      0          0.34
Ying Nian Wu         5      1652       267.72
Premkumar Natarajan  6      874        79.46