Title
JDMAN: Joint Discriminative and Mutual Adaptation Networks for Cross-Domain Facial Expression Recognition
Abstract
Cross-domain Facial Expression Recognition (FER) is challenging due to the difficulty of concurrently handling the domain shift and semantic gap during domain adaptation. Existing methods mainly focus on reducing the domain discrepancy for transferable features but fail to decrease the semantic one, which may result in negative transfer. To this end, we propose Joint Discriminative and Mutual Adaptation Networks (JDMAN), which collaboratively bridge the domain shift and semantic gap by domain- and category-level co-adaptation based on mutual information and discriminative metric learning techniques. Specifically, we design a mutual information minimization module for domain-level adaptation, which narrows the domain shift by simultaneously distilling the domain-invariant components and eliminating the untransferable ones lying in different domains. Moreover, we propose a semantic metric learning module for category-level adaptation, which can close the semantic discrepancy during discriminative intra-domain representation learning and transferable inter-domain knowledge discovery. These two modules are jointly leveraged in our JDMAN to safely transfer the source knowledge to target data in an end-to-end manner. Extensive experimental results on six databases show that our method achieves state-of-the-art performance. The code of our JDMAN is available at https://github.com/YingjianLi/JDMAN.
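The abstract describes minimizing mutual information between domain-specific components as the domain-level adaptation signal. The paper's actual estimator is not given in this record; as a minimal, illustrative sketch of the quantity being minimized, the function below computes a histogram-based mutual information estimate between two 1-D feature variables (the function name, binning scheme, and inputs are assumptions for illustration, not the authors' implementation).

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based MI estimate (in nats) between two 1-D samples.

    Illustrative only: approximates I(X;Y) = sum p(x,y) log(p(x,y)/(p(x)p(y)))
    from a joint histogram; this is not the estimator used in JDMAN.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # empirical joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x (column vector)
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y (row vector)
    nz = pxy > 0                              # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Under this view, near-independent features from two domains yield an MI estimate close to zero, while strongly coupled (untransferable, domain-revealing) features yield a large one; a domain-level adaptation loss would push the latter toward the former.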
Year: 2021
DOI: 10.1145/3474085.3475484
Venue: International Multimedia Conference
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name | Order | Citations | PageRank
Yingjian Li | 1 | 5 | 1.40
Yingnan Gao | 2 | 2 | 0.70
Bingzhi Chen | 3 | 4 | 1.75
Zheng Zhang | 4 | 549 | 40.45
Lei Zhu | 5 | 854 | 51.69
Lu Guangming | 6 | 786 | 62.39