Title
FedCMR: Federated Cross-Modal Retrieval
Abstract
Deep cross-modal retrieval methods have shown their competitiveness among different cross-modal retrieval algorithms. Generally, these methods require a large amount of training data. However, aggregating large amounts of data incurs huge privacy risks and high maintenance costs. Inspired by the recent success of federated learning, we propose federated cross-modal retrieval (FedCMR), which learns the model with decentralized multi-modal data. Specifically, we first train the cross-modal retrieval model and learn the common space across multiple modalities in each client using its local data. Then, we jointly learn the common subspace of multiple clients on the trusted central server. Finally, each client updates the common subspace of its local model based on the aggregated common subspace on the server, so that all clients participating in the training can benefit from federated learning. Experimental results on four benchmark datasets demonstrate the effectiveness of the proposed method.
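The server-side aggregation step described in the abstract can be illustrated with a minimal FedAvg-style sketch: weighted averaging of each client's common-subspace projection parameters, followed by each client reloading the aggregated parameters. This is a hypothetical illustration under stated assumptions, not the authors' implementation; the function name aggregate_subspace, the PyTorch state-dict representation, and the data-size weighting are all assumptions.

import torch

def aggregate_subspace(client_states, client_sizes):
    # Hypothetical sketch: weighted-average the common-subspace projection
    # parameters collected from all clients (FedAvg-style aggregation).
    # client_states: list of dicts mapping parameter names to torch.Tensor.
    # client_sizes: number of local training samples per client.
    total = float(sum(client_sizes))
    aggregated = {}
    for name in client_states[0]:
        # Each client's contribution is weighted by its share of the data.
        aggregated[name] = sum(
            (n / total) * state[name]
            for state, n in zip(client_states, client_sizes)
        )
    return aggregated

# Each client would then load the aggregated parameters back into the
# common-space projection of its local model, e.g.:
#   local_model.common_space.load_state_dict(aggregated)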
Year
2021
DOI
10.1145/3404835.3462989
Venue
Research and Development in Information Retrieval
Keywords
Cross-modal retrieval, multi-modal learning, federated learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
6
Name            Order   Citations   PageRank
Linlin Zong     1       19          5.34
Qiujie Xie      2       0           0.34
Jiahui Zhou     3       0           0.34
Peiran Wu       4       0           0.34
Xianchao Zhang  5       313         39.57
Bo Xu           6       4           4.77