Abstract |
---|
We present a new algorithm for domain adaptation improving upon the discrepancy minimization algorithm (DM), which was previously shown to outperform a number of popular algorithms designed for this task. Unlike most previous approaches adopted for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, it uses a reweighting that depends on the hypothesis considered and is based on the minimization of a new measure of generalized discrepancy. We give a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also present a detailed theoretical analysis of its learning guarantees, which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon the DM algorithm in several tasks. |
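The abstract contrasts the proposed method with the common approach of a *fixed* reweighting of training losses. As a minimal, hypothetical sketch of that baseline style (not the paper's generalized-discrepancy algorithm), the snippet below fits a weighted ridge regression in which fixed per-example weights `w` rescale the squared losses; all names and values here are illustrative assumptions.

```python
import numpy as np

def weighted_ridge(X, y, w, lam=1e-2):
    """Minimize sum_i w_i * (x_i . h - y_i)^2 + lam * ||h||^2.

    A fixed-reweighting learner: the weights w do not depend on the
    hypothesis h, unlike the hypothesis-dependent reweighting the
    abstract describes.
    """
    W = np.diag(w)
    d = X.shape[1]
    # Closed-form solution of the weighted regularized least squares.
    return np.linalg.solve(X.T @ W @ X + lam * np.eye(d), X.T @ W @ y)

# Toy data: with uniform weights this reduces to ordinary ridge regression.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
w = np.ones(100)
h = weighted_ridge(X, y, w)
```

In an adaptation setting, `w` would typically be chosen once (e.g., from an estimated density ratio between target and source distributions) and then held fixed, which is precisely the restriction the paper's hypothesis-dependent reweighting removes.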
Year | DOI | Venue |
---|---|---|
2014 | 10.1145/2783258.2783368 | Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining |
Keywords | Field | DocType
---|---|---
learning theory | Mathematical optimization, Domain adaptation, Learning theory, Computer science, Algorithm, Minification, Artificial intelligence, Convex optimization, Minimization algorithm, Machine learning | Journal

ISBN | Citations | PageRank
---|---|---
978-1-4503-3664-2 | 5 | 0.40

References | Authors
---|---
30 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Corinna Cortes | 1 | 6574 | 1120.50 |
Mehryar Mohri | 2 | 4502 | 448.21 |
Andres Muñoz Medina | 3 | 69 | 3.44 |