Title
Taking A Hint: Leveraging Explanations To Make Vision And Language Models More Grounded
Abstract
Many vision and language models suffer from poor visual grounding - often falling back on easy-to-learn language priors rather than basing their decisions on visual concepts in the image. In this work, we propose a generic approach called Human Importance-aware Network Tuning (HINT) that effectively leverages human demonstrations to improve visual grounding. HINT encourages deep networks to be sensitive to the same input regions as humans. Our approach optimizes the alignment between human attention maps and gradient-based network importances - ensuring that models learn not just to look at, but to rely on, the visual concepts that humans found relevant for a task when making predictions. We apply HINT to Visual Question Answering and Image Captioning tasks, outperforming top approaches on splits that penalize over-reliance on language priors (VQA-CP and robust captioning) while using human attention demonstrations for just 6% of the training data.
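Concretely, one way to realize the alignment described in the abstract is to compute a gradient-based importance score for each image region and penalize region orderings that disagree with the human attention map. Below is a minimal PyTorch sketch of that idea; the `model` call signature, the gradient-times-input importance proxy, and the pairwise ranking hinge are illustrative assumptions, not the paper's exact objective.

    import torch
    import torch.nn.functional as F

    def hint_style_loss(model, features, answer_index, human_attention):
        # features:        (batch, regions, dim) image-region features
        # answer_index:    (batch,) ground-truth answer indices
        # human_attention: (batch, regions) human importance per region
        features = features.detach().requires_grad_(True)
        scores = model(features)  # (batch, num_answers); signature assumed
        gt_score = scores.gather(1, answer_index.unsqueeze(1)).sum()
        # Sensitivity of the correct answer's score to each region.
        grads = torch.autograd.grad(gt_score, features, create_graph=True)[0]
        net_importance = (grads * features).sum(dim=-1)  # (batch, regions)
        # Pairwise hinge: penalize region pairs that the network ranks
        # opposite to how humans rank them.
        d_net = net_importance.unsqueeze(2) - net_importance.unsqueeze(1)
        d_hum = human_attention.unsqueeze(2) - human_attention.unsqueeze(1)
        return F.relu(-d_net * torch.sign(d_hum)).mean()

Because the gradients are taken with `create_graph=True`, this loss remains differentiable with respect to the network parameters, so it can simply be added to the task loss for the subset of training examples that carry human attention annotations.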
Year
2019
DOI
10.1109/ICCV.2019.00268
Venue
2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019)
Field
Computer vision, Cognitive science, Computer science, Artificial intelligence, Language model
DocType
Conference
Volume
2019
Issue
1
ISSN
1550-5499
Citations
9
PageRank
1.17
References
2
Authors
8
Name             Order  Citations  PageRank
Ram Prasaath     1      381        14.58
Stefan Lee       2      231        19.88
Yilin Shen       3      263        34.18
Hongxia Jin      4      633        67.53
Shalini Ghosh    5      9          2.19
Larry P. Heck    6      1096       100.58
Dhruv Batra      7      2142       104.81
Devi Parikh      8      2929       132.01