Title
Cross-Modal Coherence for Text-to-Image Retrieval
Abstract
Common approaches to joint image–text understanding presume that images and their associated text can be universally characterized by a single implicit model. However, co-occurring images and text can be related in qualitatively different ways, and modeling these relations explicitly could improve current joint understanding models. In this paper, we train a Cross-Modal Coherence Model for the text-to-image retrieval task. Our analysis shows that models trained with image–text coherence relations retrieve the images originally paired with the target text more often than coherence-agnostic models. Human evaluation further shows that images retrieved by the proposed coherence-aware model are preferred over those from a coherence-agnostic baseline by a significant margin. Our findings provide insights into how different modalities communicate and into the role of coherence relations in capturing commonsense inferences in text and imagery.
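The abstract contrasts coherence-aware retrieval with a coherence-agnostic baseline. As a minimal sketch of that contrast (not the paper's actual architecture: the embeddings, the relation labels, the additive relation-conditioning, and all function names here are illustrative assumptions), a coherence-aware scorer might condition the text representation on a predicted coherence relation before ranking candidate images:

```python
import numpy as np

# Hypothetical sketch: coherence-aware vs. coherence-agnostic
# text-to-image retrieval scoring. The relation set and the simple
# additive conditioning are illustrative assumptions only.

RELATIONS = ["Visible", "Subjective", "Action", "Story", "Meta"]  # example labels

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def coherence_agnostic_score(text_emb, image_emb):
    # Baseline: rank images by raw text-image similarity alone.
    return cosine(text_emb, image_emb)

def coherence_aware_score(text_emb, image_emb, relation_emb, alpha=0.5):
    # Condition the text embedding on the predicted coherence relation,
    # then score against the image; alpha weights the relation signal.
    conditioned = text_emb + alpha * relation_emb
    return cosine(conditioned, image_emb)

def retrieve(text_emb, image_embs, score_fn, **kw):
    # Return candidate image indices ranked by descending score.
    scores = [score_fn(text_emb, img, **kw) for img in image_embs]
    return sorted(range(len(image_embs)), key=lambda i: -scores[i])
```

Under this sketch, two models sharing the same similarity backbone can rank the same candidates differently once the relation signal is injected, which is the kind of behavioral gap the abstract's retrieval and human-evaluation comparisons measure.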
Year: 2022
Venue: AAAI Conference on Artificial Intelligence
Keywords: Speech & Natural Language Processing (SNLP), Computer Vision (CV)
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name               Order  Citations  PageRank
Malihe Alikhani    1      1          5.45
Fangda Han         2      0          0.68
Hareesh Ravi       3      0          0.34
Mubbasir Kapadia   4      546        58.07
Vladimir Pavlović  5      74         8.53
Matthew Stone      6      887        118.99