Title
Model Adaptation through Hypothesis Transfer with Gradual Knowledge Distillation
Abstract
The ability to adapt perception to changing environments is a core characteristic of intelligent robots. At present, Unsupervised Domain Adaptation (UDA) methods are used to address this problem: the adaptation task is formulated as a transfer from a well-described scenario (the source domain) to a new scenario (the target domain). To match the distributions of the two domains, these methods require access to the source data. In many real-world applications, however, the source data is inaccessible, and only a model pre-trained on the source domain is available during transfer, so traditional UDA methods cannot handle this challenging setting. This paper develops a new hypothesis transfer method that achieves model adaptation through gradual knowledge distillation. Specifically, we first prepare a source model by training a deep network on the labeled source domain with supervised learning. We then transfer the source model to the unlabeled target domain by self-training. To implement gradual knowledge distillation, we slice the self-training into several epochs and use the soft pseudo-labels produced in the previous epoch to guide the current one. These soft labels are generated by semantic fusion over a proposed neighborhood geometry. To regularize the self-training, we develop a new objective constructed on this neighborhood. Experiments on three benchmarks confirm that our method achieves state-of-the-art results.
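To make the epoch-wise scheme described in the abstract concrete, the sketch below shows one plausible PyTorch realization of gradual knowledge distillation by self-training: before each epoch, the current model's predictions on the whole target set are fused with those of their nearest neighbors in feature space to form soft pseudo-labels, which then supervise that epoch. The network, the k-nearest-neighbor fusion rule, the KL-divergence loss, and all hyperparameters are illustrative assumptions, not the authors' released implementation (which also includes a neighborhood-based regularization objective not sketched here).

# Hedged sketch: epoch-wise self-training guided by soft pseudo-labels from
# the previous epoch (gradual knowledge distillation). All names, the fusion
# rule, and hyperparameters are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Stand-in for a source-pretrained network: feature extractor + classifier head."""
    def __init__(self, in_dim=64, feat_dim=32, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = self.backbone(x)
        return f, self.head(f)

def fuse_neighbors(features, probs, k=4):
    """One plausible 'semantic fusion': average each sample's class
    probabilities with those of its k nearest neighbors in feature space."""
    f = F.normalize(features, dim=1)
    sims = f @ f.t()                           # cosine similarity, shape (N, N)
    _, idx = sims.topk(k + 1, dim=1)           # neighborhood includes the sample itself
    return probs[idx].mean(dim=1)              # (N, C) soft pseudo-labels

def adapt(model, target_x, epochs=5, batch_size=32, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    n = target_x.size(0)
    for epoch in range(epochs):
        # 1) Snapshot predictions on the whole target set; at epoch 0 this is
        #    the source-pretrained model, afterwards the previous epoch's model.
        model.eval()
        with torch.no_grad():
            feats, logits = model(target_x)
            soft = fuse_neighbors(feats, logits.softmax(dim=1))
        # 2) Train the current epoch against the previous snapshot's soft labels.
        model.train()
        perm = torch.randperm(n)
        for s in range(0, n, batch_size):
            b = perm[s:s + batch_size]
            _, out = model(target_x[b])
            loss = F.kl_div(out.log_softmax(dim=1), soft[b], reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SmallNet()                  # imagine this was pretrained on the source domain
    adapt(model, torch.randn(256, 64))  # unlabeled target data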
Year
2021
DOI
10.1109/IROS51168.2021.9636206
Venue
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DocType
Conference
ISSN
2153-0858
Citations
0
PageRank
0.34
References
0
Authors
7
Name           Order  Citations  PageRank
Song Tang      1      17         2.67
Yuji Shi       2      0          0.34
Zhiyuan Ma     3      27         9.26
Jian Li        4      0          0.68
Jianzhi Lyu    5      0          0.68
Qingdu Li      6      160        26.78
Jianwei Zhang  7      90         31.35