Abstract
---
Cross-modal retrieval methods have improved significantly in recent years through the use of deep neural networks and large-scale annotated datasets such as ImageNet and Places. However, collecting and annotating such datasets requires a tremendous amount of human effort and, moreover, their annotations are limited to discrete sets of popular visual classes that may not be representative of the richer semantics found in large-scale cross-modal retrieval datasets. In this paper, we present a self-supervised cross-modal retrieval framework that uses the correlations between images and text across the entire set of Wikipedia articles as training data. Our method consists of training a CNN to predict: (1) the semantic context of the article in which an image is most likely to appear as an illustration, and (2) the semantic context of its caption. Our experiments demonstrate that the proposed method not only learns discriminative visual representations for solving vision tasks such as classification, but also that the learned representations are better suited for cross-modal retrieval than supervised pre-training of the network on the ImageNet dataset.
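The abstract only states that a CNN is trained to predict the "semantic context" of the enclosing article and of the caption; it does not specify the backbone, the context representation, or the loss. Below is a minimal, hypothetical sketch of that two-head setup, assuming a ResNet-50 backbone, topic-probability vectors (e.g. from LDA) as the soft targets for both contexts, and a soft cross-entropy loss; all of these choices are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ContextPredictionCNN(nn.Module):
    """Sketch of a CNN with two heads: one predicts the topic distribution of the
    Wikipedia article an image illustrates, the other the topic distribution of its
    caption. Backbone, head sizes and number of topics are assumptions."""
    def __init__(self, num_topics=40):
        super().__init__()
        backbone = models.resnet50(weights=None)   # trained from scratch (self-supervised)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep only the pooled visual features
        self.backbone = backbone
        self.article_head = nn.Linear(feat_dim, num_topics)  # article-context prediction
        self.caption_head = nn.Linear(feat_dim, num_topics)  # caption-context prediction

    def forward(self, images):
        feats = self.backbone(images)
        return self.article_head(feats), self.caption_head(feats)

def soft_target_loss(logits, target_dist):
    """Cross-entropy against a soft target distribution (e.g. LDA topic probabilities)."""
    log_probs = torch.log_softmax(logits, dim=1)
    return -(target_dist * log_probs).sum(dim=1).mean()

# Toy usage with random stand-ins for images and precomputed topic vectors.
model = ContextPredictionCNN(num_topics=40)
images = torch.randn(8, 3, 224, 224)
article_topics = torch.softmax(torch.randn(8, 40), dim=1)  # stand-in for article topic vectors
caption_topics = torch.softmax(torch.randn(8, 40), dim=1)  # stand-in for caption topic vectors
article_logits, caption_logits = model(images)
loss = (soft_target_loss(article_logits, article_topics)
        + soft_target_loss(caption_logits, caption_topics))
loss.backward()
```

After such training, the pooled backbone features (or the predicted topic vectors) would serve as the image-side representation for cross-modal retrieval against text projected into the same topic space.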
Year | DOI | Venue |
---|---|---|
2019 | 10.1145/3323873.3325035 | ICMR '19: International Conference on Multimedia Retrieval, Ottawa, ON, Canada, June 2019 |
Keywords | Field | DocType |
---|---|---|
Self-Supervised Learning, Visual Representations, Cross-Modal Retrieval | Training set, Network on, Object detection, Computer science, Semantic context, Artificial intelligence, Contextual image classification, Discriminative model, Semantics, Modal, Machine learning | Journal |
Volume | ISBN | Citations |
---|---|---|
abs/1902.00378 | 978-1-4503-6765-3 | 1 |
PageRank | References | Authors |
---|---|---|
0.35 | 0 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yash Patel | 1 | 9 | 4.28 |
Lluís Gómez | 2 | 5 | 0.73 |
Marçal Rusiñol | 3 | 386 | 33.57 |
Dimosthenis Karatzas | 4 | 406 | 38.13 |
C. V. Jawahar | 5 | 1700 | 148.58 |