Abstract |
---|
Recent approaches to unsupervised domain adaptation focus on transferring knowledge from source (labeled) data to target (unlabeled) data. Both data types share the same class space but originate from different domains. One way to achieve this transfer is to task two classifiers with detecting target features that diverge from source features, while a feature generator is adversarially refined to match the detected features with the source ones. However, the aligned features are still subject to ambiguity: samples that are not smoothly distributed on the latent manifold are often missed in training. Moreover, target data may not be sufficient for adversarial learning. To overcome these problems, our proposed Adversarially Smoothed Feature Alignment (AdvSFA) model is designed to identify ambiguous target inputs by maximizing classifier discrepancy in an extended class space. This enables the generator to receive valuable feedback from the classifiers and consequently learn more discriminative and smoother representations. Imposing smoothness on the latent manifold improves model generalization and avoids placing neighboring samples in different classes. To further promote this property, we task the generator to conduct feature alignment not only on target examples but also in between them. With these constraints, our method shows remarkable improvement across different adaptation tasks on two benchmark datasets. |
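The abstract names two key ingredients: a discrepancy measure between two classifiers that is maximized to expose ambiguous target features (and minimized by the generator), and feature alignment enforced not only on target examples but also on interpolations between them. The following is a minimal NumPy sketch of those two objectives only; the L1 discrepancy, the linear softmax classifiers, and all shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def classifier_discrepancy(p1, p2):
    """Mean L1 distance between two classifiers' class probabilities.
    In the adversarial game, the classifiers are trained to maximize this
    on target features, while the generator is trained to minimize it."""
    return np.abs(p1 - p2).mean()

def interpolate(features, alpha=0.5, rng=None):
    """Convex combinations of shuffled target features, so that alignment
    can be enforced in between target examples, not only on them."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(len(features))
    return alpha * features + (1.0 - alpha) * features[perm]

rng = np.random.default_rng(0)
target_feats = rng.normal(size=(8, 16))    # toy generator outputs
W1, W2 = rng.normal(size=(2, 16, 10))      # two toy linear classifiers
p1 = softmax(target_feats @ W1)
p2 = softmax(target_feats @ W2)
d = classifier_discrepancy(p1, p2)         # adversarial objective value
mixed = interpolate(target_feats, rng=rng) # in-between target samples
```

In a full training loop, `d` would be maximized with respect to the classifier weights and minimized with respect to the generator, with `mixed` passed through the same alignment objective.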
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/IJCNN52387.2021.9533492 | 2021 International Joint Conference on Neural Networks (IJCNN) |
DocType | ISSN | Citations |
---|---|---|
Conference | 2161-4393 | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 0 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Mohamed Said Mahmoud Azzam | 1 | 1 | 1.70 |
Si Wu | 2 | 17 | 7.03 |
Yang Zhang | 3 | 0 | 0.34 |
Aurele Tohokantche Gnanha | 4 | 0 | 0.68 |
Hau-San Wong | 5 | 1008 | 86.89 |