| Abstract |
|---|
| With the emergence of social media, vast numbers of video clips are uploaded every day, and retrieving the most relevant visual content with a language query becomes critical. Most approaches aim to learn a joint embedding space for plain textual and visual content without adequately exploiting their intra-modality structures and inter-modality correlations. This paper proposes a novel transformer that explicitly disentangles text and video into the semantic roles of objects, spatial context, and temporal context, with an attention scheme that learns the intra- and inter-role correlations among the three roles to discover discriminative features for matching at different levels. Preliminary results on the popular YouCook2 dataset indicate that our approach surpasses a current state-of-the-art method by a large margin in all metrics, and also outperforms two other SOTA methods in two metrics. |
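The abstract describes the role-disentangled attention scheme only at a high level. The sketch below is a hypothetical illustration of what intra- and inter-role attention over object, spatial-context, and temporal-context features could look like; the class name `RoleAttentionSketch`, the feature dimensions, and the module choices are all assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class RoleAttentionSketch(nn.Module):
    """Toy intra-/inter-role attention over three semantic roles (hypothetical)."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # One self-attention block per role, modeling intra-role correlations.
        self.intra = nn.ModuleDict({
            role: nn.MultiheadAttention(dim, heads, batch_first=True)
            for role in ("objects", "spatial", "temporal")
        })
        # A shared attention block over all role tokens, modeling
        # inter-role correlations among the three roles.
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, roles: dict[str, torch.Tensor]) -> torch.Tensor:
        # roles[name] has shape (batch, tokens, dim) for one semantic role.
        refined = [self.intra[name](x, x, x)[0] for name, x in roles.items()]
        fused = torch.cat(refined, dim=1)         # concatenate all role tokens
        out, _ = self.inter(fused, fused, fused)  # roles attend to each other
        return out.mean(dim=1)                    # pooled embedding for matching


# Encode made-up role features for one modality and pool them into a single
# embedding; a text query encoded the same way would then be ranked against
# video embeddings by similarity in the joint space.
feats = {r: torch.randn(2, 5, 256) for r in ("objects", "spatial", "temporal")}
video_emb = RoleAttentionSketch()(feats)          # shape: (2, 256)
```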
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/ICIP42928.2021.9506267 | ICIP |

DocType | Published In | Citations
---|---|---|
Conference | IEEE International Conference on Image Processing (ICIP), 2021, pp. 1334-1338 | 0

PageRank | References | Authors
---|---|---|
0.34 | 0 | 4

Name | Order | Citations | PageRank |
---|---|---|---|
Burak Satar | 1 | 0 | 0.34 |
Hongyuan Zhu | 2 | 109 | 16.59 |
Xavier Bresson | 3 | 0 | 0.68 |
Joo Hwee Lim | 4 | 0 | 0.68 |