Title |
---|
Deep Multigraph Hierarchical Enhanced Semantic Representation for Cross-Modal Retrieval |
Abstract |
---|
The main challenge of cross-modal retrieval is to efficiently realize cross-modal semantic alignment and reduce the heterogeneity gap. However, existing approaches either ignore the multigrained semantic knowledge learned from different modalities, or fail to learn consistent relation distributions of semantic details in multimodal instances. To this end, this article proposes a novel end-to-end cross-modal representation method, termed deep multigraph-based hierarchical enhanced semantic representation (MG-HESR). The method integrates multigraph-based hierarchical semantic representation with cross-modal adversarial learning: it captures multigrained semantic knowledge from cross-modal samples, aligns fine-grained semantic relation distributions, and then generates modality-invariant representations in a common subspace. To evaluate the performance, extensive experiments are conducted on four benchmarks. The experimental results show that our method is superior to the state-of-the-art methods. |
Year | DOI | Venue
---|---|---
2022 | 10.1109/MMUL.2022.3144138 | IEEE MultiMedia

Keywords | DocType | Volume
---|---|---
Semantics, Adversarial machine learning, Correlation, Visualization, Generators, Generative adversarial networks, Computer science | Journal | 29

Issue | ISSN | Citations
---|---|---
3 | 1070-986X | 0

PageRank | References | Authors
---|---|---
0.34 | 11 | 6
Name | Order | Citations | PageRank
---|---|---|---
Lei Zhu | 1 | 854 | 51.69 |
Chengyuan Zhang | 2 | 0 | 0.34 |
Jiayu Song | 3 | 0 | 0.34 |
Shichao Zhang | 4 | 2777 | 164.25 |
Chunwei Tian | 5 | 0 | 0.34 |
Xinghui Zhu | 6 | 0 | 1.01 |