Abstract
---

Common image–text joint understanding techniques presume that images and the associated text can universally be characterized by a single implicit model. However, co-occurring images and text can be related in qualitatively different ways, and explicitly modeling these relationships could improve the performance of current joint understanding models. In this paper, we train a Cross-Modal Coherence Model for the task of text-to-image retrieval. Our analysis shows that models trained with image–text coherence relations can retrieve images originally paired with target text more often than coherence-agnostic models. We also show via human evaluation that images retrieved by the proposed coherence-aware model are preferred over a coherence-agnostic baseline by a substantial margin. Our findings provide insights into the ways that different modalities communicate and the role of coherence relations in capturing commonsense inferences in text and imagery.
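The abstract does not describe the model architecture, but the core idea of coherence-aware retrieval can be illustrated with a minimal sketch: a dual encoder that scores text–image pairs with a contrastive retrieval loss, plus an auxiliary head that predicts the coherence relation of each matched pair. Everything below (the class name `CoherenceAwareRetriever`, the feature dimensions, `n_relations`, the loss weighting `alpha`) is a hypothetical assumption for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a coherence-aware retrieval objective.
# Assumptions (not from the paper): pre-extracted text/image features,
# a shared embedding space, and an auxiliary coherence-relation classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoherenceAwareRetriever(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, embed_dim=256, n_relations=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, embed_dim)    # project text features
        self.image_proj = nn.Linear(image_dim, embed_dim)  # project image features
        # Auxiliary head: predicts the coherence relation of a matched pair
        # from the concatenated text and image embeddings.
        self.relation_head = nn.Linear(2 * embed_dim, n_relations)

    def forward(self, text_feats, image_feats):
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        v = F.normalize(self.image_proj(image_feats), dim=-1)
        sim = t @ v.T  # pairwise cosine similarities used for retrieval
        rel_logits = self.relation_head(torch.cat([t, v], dim=-1))
        return sim, rel_logits

def joint_loss(sim, rel_logits, relations, temperature=0.07, alpha=0.5):
    """Contrastive retrieval loss plus coherence-relation classification loss."""
    targets = torch.arange(sim.size(0))  # matched pairs lie on the diagonal
    retrieval = F.cross_entropy(sim / temperature, targets)
    coherence = F.cross_entropy(rel_logits, relations)
    return retrieval + alpha * coherence

# Toy usage with random features and random coherence-relation labels.
model = CoherenceAwareRetriever()
text = torch.randn(8, 768)
image = torch.randn(8, 2048)
relations = torch.randint(0, 4, (8,))
sim, rel_logits = model(text, image)
loss = joint_loss(sim, rel_logits, relations)
loss.backward()
```

A coherence-agnostic baseline in this sketch would simply set `alpha=0`, dropping the relation supervision; the paper's comparison is between models with and without such coherence signals.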
Year | Venue | Keywords |
---|---|---|
2022 | AAAI Conference on Artificial Intelligence | Speech & Natural Language Processing (SNLP), Computer Vision (CV)
DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34

References | Authors
---|---
0 | 6
Name | Order | Citations | PageRank |
---|---|---|---|
Malihe Alikhani | 1 | 1 | 5.45 |
Fangda Han | 2 | 0 | 0.68 |
Hareesh Ravi | 3 | 0 | 0.34 |
Mubbasir Kapadia | 4 | 546 | 58.07 |
Vladimir Pavlović | 5 | 74 | 8.53 |
Matthew Stone | 6 | 887 | 118.99 |