Title
Generalization Bounds for Transfer Learning under Model Shift.
Abstract
Transfer learning (sometimes also referred to as domain adaptation) algorithms are often used when one tries to apply a model learned from a fully labeled source domain to an unlabeled target domain that is similar but not identical to the source. Previous work on covariate shift focuses on matching the marginal distributions of the observations X across domains while assuming the conditional distribution P(Y|X) stays the same; theory for the covariate shift setting has also been developed. Recent work on transfer learning under model shift allows the conditional distributions P(Y|X) to differ across domains, given a few target labels and assuming the change is smooth. However, no analysis has been provided showing when these algorithms work. In this paper, we analyze transfer learning algorithms under the model shift assumption. Our analysis shows that when the conditional distribution changes, we can obtain a generalization error bound of O(1/(λ* √n_l)) with respect to the labeled target sample size n_l, modified by the smoothness λ* of the change across domains. Our analysis also sheds light on conditions under which transfer learning works better than no-transfer learning (learning from labeled target data only). Furthermore, we extend the transfer learning algorithm from a single source to multiple sources.
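As a concrete illustration of the model-shift setting described in the abstract, the sketch below fits a model on a fully labeled source domain and then learns a smooth correction to P(Y|X) from a handful of target labels. This is a minimal sketch under stated assumptions, not the authors' algorithm from the paper: the offset decomposition, the use of scikit-learn's KernelRidge, and all data and parameter choices (the RBF kernels, and the ridge penalty standing in for the smoothness of the change across domains) are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact algorithm) of transfer learning under
# model shift: P(Y|X) differs between source and target, but the change is
# assumed smooth, so a flexible offset learned from a few target labels can
# correct the source model. All names and constants here are illustrative.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Fully labeled source domain.
X_src = rng.uniform(0, 1, size=(200, 1))
y_src = np.sin(2 * np.pi * X_src[:, 0]) + 0.1 * rng.normal(size=200)

# Target domain: P(Y|X) is shifted by a smooth offset; only a few labels (n_l = 10).
X_tgt = rng.uniform(0, 1, size=(10, 1))
y_tgt = np.sin(2 * np.pi * X_tgt[:, 0]) + 0.5 * X_tgt[:, 0] + 0.1 * rng.normal(size=10)

# Step 1: fit the source model on the fully labeled source data.
src_model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=10.0)
src_model.fit(X_src, y_src)

# Step 2: learn the smooth offset from the few labeled target points.
# The large ridge penalty encodes the smoothness assumption on the change.
residual = y_tgt - src_model.predict(X_tgt)
offset_model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0)
offset_model.fit(X_tgt, residual)

# Step 3: predict on new target points as source prediction + learned offset.
X_new = np.linspace(0, 1, 50).reshape(-1, 1)
y_pred = src_model.predict(X_new) + offset_model.predict(X_new)

# Baseline: no-transfer learning from the 10 target labels alone.
no_transfer = KernelRidge(kernel="rbf", alpha=1e-2, gamma=10.0).fit(X_tgt, y_tgt)
y_true = np.sin(2 * np.pi * X_new[:, 0]) + 0.5 * X_new[:, 0]
print("transfer MSE:   ", np.mean((y_pred - y_true) ** 2))
print("no-transfer MSE:", np.mean((no_transfer.predict(X_new) - y_true) ** 2))
```

When the offset is smooth and target labels are scarce, the transfer estimate typically beats the no-transfer baseline, which matches the regime the paper's bound describes.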
Year
2015
Venue
UAI
Field
Covariate shift, Conditional probability distribution, Transfer of learning, Artificial intelligence, Generalization error, Smoothness, Machine learning, Sample size determination, Mathematics, Marginal distribution
DocType
Conference
Citations
1
PageRank
0.35
References
13
Authors
2
Name               Order  Citations  PageRank
Xuezhi Wang        1      50         5.24
Jeff G. Schneider  2      1616       165.43