Title
Learning Transferrable and Interpretable Representations for Domain Generalization
Abstract
Conventional machine learning models are often vulnerable to samples whose distributions differ from that of the training samples, a problem known as domain shift. Domain Generalization (DG) addresses this issue by training a model on multiple source domains and generalizing it to arbitrary unseen target domains. Despite the remarkable results achieved in DG, a majority of existing works lack a deep understanding of the feature representations learned in DG models, resulting in limited generalization ability when facing out-of-distribution domains. In this paper, we aim to learn a domain transformation space via a domain transformer network (DTN) which explicitly mines the relationships among multiple domains and constructs transferable feature representations for downstream tasks by interpreting each feature as a semantically weighted combination of multiple domain-specific features. Our DTN is encouraged to meta-learn the properties and characteristics of domains during training on multiple seen domains, making the transformed feature representations more semantically meaningful and thus generalizing better to unseen domains. Once the model is trained, the feature representations of unseen target domains can be inferred adaptively by selectively combining feature representations from the diverse set of seen domains. We conduct extensive experiments on five DG benchmarks, and the results strongly demonstrate the effectiveness of our approach.
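The abstract gives no implementation details, but its central idea, representing each sample's feature as a per-sample weighted combination of domain-specific features, can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' actual architecture: the module name DomainTransformer, the per-domain linear projections, and the softmax weight head are all assumptions made for the sketch.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# each feature is expressed as a semantically weighted combination
# of domain-specific features, with weights predicted per sample,
# so unseen-domain features can be composed from seen-domain ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainTransformer(nn.Module):
    def __init__(self, feat_dim: int, num_domains: int):
        super().__init__()
        # One lightweight projection per seen source domain (assumed form).
        self.domain_proj = nn.ModuleList(
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_domains)]
        )
        # Predicts a per-sample mixing weight over the seen domains.
        self.weight_head = nn.Linear(feat_dim, num_domains)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim) backbone features.
        # Domain-specific views: (batch, num_domains, feat_dim).
        views = torch.stack([proj(x) for proj in self.domain_proj], dim=1)
        # Per-sample weights over domains: (batch, num_domains).
        w = F.softmax(self.weight_head(x), dim=-1)
        # Weighted combination of the domain-specific features.
        return (w.unsqueeze(-1) * views).sum(dim=1)

# Usage: at test time the weights are inferred from the sample itself,
# adaptively combining seen-domain representations for unseen domains.
dtn = DomainTransformer(feat_dim=512, num_domains=4)
feats = torch.randn(8, 512)
out = dtn(feats)  # shape: (8, 512)
```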
Year
2021
DOI
10.1145/3474085.3475488
Venue
International Multimedia Conference
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
5
Name          Order  Citations  PageRank
Zhekai Du     1      3          3.10
Jingjing Li   2      597        44.26
Ke Lu         3      64         4.71
Lei Zhu       4      854        51.69
Zi Huang      5      0          0.34