Abstract |
---|
We present a new algorithm for domain adaptation improving upon the discrepancy minimization (DM) algorithm, previously shown to outperform a number of algorithms for this problem. Unlike many previously proposed solutions for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, the reweighting depends on the hypothesis sought. The algorithm is derived from a less conservative notion of discrepancy than that of the DM algorithm, called generalized discrepancy. We present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also give a detailed theoretical analysis of its learning guarantees, which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon discrepancy minimization in several tasks. |
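The abstract's central idea — a reweighting of source losses that depends on the hypothesis sought, rather than being fixed in advance — can be illustrated with a toy alternating scheme. The weight-update rule below is an assumption for illustration only (it mixes closeness to the target sample with the current hypothesis's loss); it is not the paper's generalized-discrepancy formulation, and the `adapt` function and its parameters are hypothetical names.

```python
import numpy as np

def fit_weighted_ridge(X, y, w, lam=1e-2):
    # Solve min_h sum_i w_i (h(x_i) - y_i)^2 + lam ||h||^2 for linear h.
    d = X.shape[1]
    Xw = X * w[:, None]  # row-wise weighting
    return np.linalg.solve(X.T @ Xw + lam * np.eye(d), Xw.T @ y)

def adapt(X_src, y_src, X_tgt, iters=10):
    # Toy hypothesis-dependent reweighting: alternate between fitting a
    # hypothesis on weighted source losses and updating the weights as a
    # function of that hypothesis (NOT the paper's actual rule).
    n = len(X_src)
    w = np.ones(n) / n
    for _ in range(iters):
        h = fit_weighted_ridge(X_src, y_src, w)
        # Illustrative update: favor source points close to the target
        # mean on which the current hypothesis incurs small loss.
        dist = np.linalg.norm(X_src - X_tgt.mean(axis=0), axis=1)
        loss = (X_src @ h - y_src) ** 2
        w = np.exp(-dist - loss)
        w /= w.sum()
    return h, w
```

Each inner fit is a convex (weighted ridge) problem, echoing the abstract's point that the overall approach can be cast as convex optimization once the reweighting rule is specified.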
Year | Venue | Keywords |
---|---|---|
2019 | Journal of Machine Learning Research | domain adaptation, learning theory |
Field | DocType | Volume
---|---|---|
Learning theory, Domain adaptation, Algorithm, Minification, Artificial intelligence, Convex optimization, Minimization algorithm, Machine learning, Mathematics | Journal | 20

Issue | ISSN | Citations
---|---|---|
1 | 1532-4435 | 1

PageRank | References | Authors
---|---|---|
0.35 | 23 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Corinna Cortes | 1 | 6574 | 1120.50 |
Mehryar Mohri | 2 | 4502 | 448.21 |
Andres Muñoz Medina | 3 | 9 | 2.84 |