Title
Domain adaptation in regression
Abstract
This paper presents a series of new results for domain adaptation in the regression setting. We prove that the discrepancy is a distance for the squared loss when the hypothesis set is the reproducing kernel Hilbert space induced by a universal kernel such as the Gaussian kernel. We give new pointwise loss guarantees based on the discrepancy of the empirical source and target distributions for the general class of kernel-based regularization algorithms. These bounds have a simpler form than previous results and hold for a broader class of convex loss functions that are not necessarily differentiable, including Lq losses and the hinge loss. We extend the discrepancy minimization adaptation algorithm to the more significant case where kernels are used and show that the problem can be cast as a semidefinite program (SDP) similar to the one in the feature space. We also show that techniques from smooth optimization can be used to derive an efficient algorithm for solving such SDPs even for very high-dimensional feature spaces. We have implemented this algorithm and report the results of experiments demonstrating its benefits for adaptation; unlike previous algorithms, it scales to large data sets of tens of thousands of points or more.
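For the squared loss and linear hypotheses of bounded norm, the discrepancy named in the abstract has a simple closed form: writing u = w - w' (so ||u|| <= 2*Lambda when ||w||, ||w'|| <= Lambda), the source/target loss difference E_P[(u.x)^2] - E_Q[(u.x)^2] equals u^T (C_P - C_Q) u, so the discrepancy is 4*Lambda^2 times the spectral norm of the difference of second-moment matrices. The sketch below is a minimal illustration of this linear special case only, not the paper's kernelized SDP algorithm; the function name, the sample arrays, and the norm bound lam are illustrative assumptions.

    import numpy as np

    def empirical_discrepancy(X_source, X_target, lam=1.0):
        # Empirical discrepancy for the squared loss over linear
        # hypotheses {x -> w.x : ||w|| <= lam}.  With u = w - w'
        # (hence ||u|| <= 2*lam), the loss difference is
        # u^T (C_s - C_t) u, maximized at 4*lam^2 * ||C_s - C_t||_2.
        C_s = X_source.T @ X_source / X_source.shape[0]  # source second moments
        C_t = X_target.T @ X_target / X_target.shape[0]  # target second moments
        return 4.0 * lam**2 * np.linalg.norm(C_s - C_t, 2)  # spectral norm

    # Hypothetical usage: the value grows as the two samples diverge.
    rng = np.random.default_rng(0)
    X_s = rng.normal(size=(200, 5))
    X_t = 1.5 * rng.normal(size=(300, 5))  # target with inflated variance
    print(empirical_discrepancy(X_s, X_t))

The paper's algorithm goes further: it reweights the source sample to minimize this discrepancy, which in the kernelized setting becomes a semidefinite program amenable to the smooth-optimization techniques described in the abstract.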
Year
2011
DOI
10.1007/978-3-642-24412-4_25
Venue
ALT
Keywords
discrepancy minimization adaptation algorithm, gaussian kernel, kernel-based regularization algorithm, new pointwise loss, reproducing kernel hilbert space, previous algorithm, domain adaptation, efficient algorithm, convex loss function, lq loss
Field
Hinge loss, Computer science, Artificial intelligence, Gaussian function, Pointwise, Kernel (linear algebra), Discrete mathematics, Kernel embedding of distributions, Support vector machine, Algorithm, Variable kernel density estimation, Machine learning, Reproducing kernel Hilbert space
DocType
Conference
Volume
6925
ISSN
0302-9743
Citations
14
PageRank
0.79
References
13
Authors
2
Name            Order  Citations  PageRank
Corinna Cortes  1      6574       1120.50
Mehryar Mohri   2      4502       448.21