Title
Multimodal Learning of Social Image Representation by Exploiting Social Relations
Abstract
Representation learning for social images has recently achieved remarkable results on many tasks, such as cross-modal retrieval and multilabel classification. However, since social images contain both multimodal content (e.g., visual images and textual descriptions) and social relations among images, modeling the content information alone may lead to suboptimal embeddings. In this paper, we propose a novel multimodal representation learning model for social images, namely, the correlational multimodal variational autoencoder (CMVAE) via triplet network. Specifically, to mine the highly nonlinear correlation between the visual content and the textual content, a CMVAE is proposed to learn a unified representation for the multiple modalities of social images. Both the information common to all modalities and the information private to each modality are encoded into the learned representation. To incorporate the social relations among images, we employ a triplet network to embed multiple types of social links into the representation. A joint embedding model is then proposed to combine these social relations with the representation learning of the multimodal content. Comprehensive experimental results on four datasets confirm the effectiveness of our method on two tasks, namely, multilabel classification and cross-modal retrieval, where it outperforms state-of-the-art multimodal representation learning methods by a significant margin.
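To make the described architecture concrete, below is a minimal, illustrative PyTorch sketch of the general idea: two modality encoders produce Gaussian posteriors that are fused into a shared latent code, the code is decoded back to both modalities (VAE reconstruction + KL terms), and a triplet margin loss pulls socially linked images together in the latent space. This is not the authors' implementation; the layer sizes, the fusion by averaging, the loss weights, and all module names are hypothetical choices for illustration only.

```python
# Minimal sketch of a CMVAE-style model with a triplet loss over social links.
# NOT the paper's code: dimensions, fusion-by-averaging, and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Encodes one modality (image or text features) into mean/log-variance."""
    def __init__(self, in_dim, latent_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)


class CMVAESketch(nn.Module):
    def __init__(self, img_dim=4096, txt_dim=300, latent_dim=128):
        super().__init__()
        self.img_enc = ModalityEncoder(img_dim, latent_dim)
        self.txt_enc = ModalityEncoder(txt_dim, latent_dim)
        self.img_dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, img_dim))
        self.txt_dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, txt_dim))

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def embed(self, img, txt):
        # Fuse the two modality posteriors by simple averaging (an assumption here).
        mu_i, lv_i = self.img_enc(img)
        mu_t, lv_t = self.txt_enc(txt)
        mu, logvar = (mu_i + mu_t) / 2, (lv_i + lv_t) / 2
        return self.reparameterize(mu, logvar), mu, logvar

    def forward(self, img, txt):
        z, mu, logvar = self.embed(img, txt)
        return self.img_dec(z), self.txt_dec(z), mu, logvar


def joint_loss(model, anchor, positive, negative, beta=1.0, gamma=0.1):
    """VAE reconstruction + KL terms plus a triplet loss over social links.

    anchor/positive/negative are (img, txt) feature pairs; 'positive' is assumed
    to be socially linked to the anchor and 'negative' unlinked.
    """
    img, txt = anchor
    img_rec, txt_rec, mu, logvar = model(img, txt)
    rec = F.mse_loss(img_rec, img) + F.mse_loss(txt_rec, txt)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    z_a = model.embed(*anchor)[1]      # use posterior means as embeddings
    z_p = model.embed(*positive)[1]
    z_n = model.embed(*negative)[1]
    triplet = F.triplet_margin_loss(z_a, z_p, z_n, margin=1.0)
    return rec + beta * kl + gamma * triplet


if __name__ == "__main__":
    model = CMVAESketch()
    batch = lambda: (torch.randn(8, 4096), torch.randn(8, 300))
    loss = joint_loss(model, batch(), batch(), batch())
    loss.backward()
    print(float(loss))
```

In the actual model, one would sample anchor/positive/negative triplets from the different types of social links described in the abstract; the toy random tensors above only demonstrate that the loss composes and backpropagates.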
Year
2021
DOI
10.1109/TCYB.2019.2896100
Venue
IEEE Transactions on Cybernetics
Keywords
Algorithms; Animals; Data Mining; Humans; Image Processing, Computer-Assisted; Machine Learning; Semantics; Social Interaction; Social Media
DocType
Journal
Volume
51
Issue
3
ISSN
2168-2267
Citations
2
PageRank
0.39
References
23
Authors
5
Name            Order  Citations  PageRank
Feiran Huang    1      50         8.30
Xiaoming Zhang  2      263        35.42
Jie Xu          3      27         3.53
Zhonghua Zhao   4      44         9.52
Zhoujun Li      5      964        115.99