Title
Combining Multiple Cues for Visual Madlibs Question Answering
Abstract
This paper presents an approach for answering fill-in-the-blank multiple choice questions from the Visual Madlibs dataset. Instead of generic and commonly used representations trained on the ImageNet classification task, our approach employs a combination of networks trained for specialized tasks such as scene recognition, person activity classification, and attribute prediction. We also present a method for localizing phrases from candidate answers in order to provide spatial support for feature extraction. We map each of these features, together with candidate answers, to a joint embedding space through normalized canonical correlation analysis (nCCA). Finally, we solve an optimization problem to learn to combine scores from nCCA models trained on multiple cues to select the best answer. Extensive experimental results show a significant improvement over the previous state of the art and confirm that answering questions from a wide range of types benefits from examining a variety of image cues and carefully choosing the spatial support for feature extraction.
Year
2019
DOI
10.1007/s11263-018-1096-0
Venue
International Journal of Computer Vision
Keywords
Visual question answering, Cue integration, Region phrase correspondence, Computer vision, Language
Field
Activity classification, Embedding, Normalization (statistics), Question answering, Computer science, Canonical correlation, Feature extraction, Artificial intelligence, Optimization problem, Machine learning, Multiple choice
DocType
Journal
Volume
127
Issue
1
ISSN
1573-1405
Citations
0
PageRank
0.34
References
36
Authors
6
Name               Order  Citations  PageRank
Tatiana Tommasi    1      502        29.31
Arun Mallya        2      102        9.33
Bryan A. Plummer   3      76         8.15
Svetlana Lazebnik  4      7379       449.66
Alexander C. Berg  5      10554      630.24
Tamara L. Berg     6      3221       225.32