Abstract |
---|
Multi-task learning attempts to simultaneously leverage data from multiple domains in order to estimate related functions on each domain. For example, transfer learning, a special case of multi-task learning, is often employed when one has a good estimate of a function on a source domain but cannot estimate a related function well on a target domain using target data alone. Multi-task/transfer learning problems are usually solved by imposing some kind of "smooth" relationship among tasks. In this paper, we study how different smoothness assumptions on task relations affect the upper bounds of algorithms proposed for these problems under different settings. For general multi-task learning, we study a family of algorithms that use a reweighting matrix on task weights to capture the smooth relationship among tasks; this family has many instantiations in the existing literature. Furthermore, for multi-task learning in a transfer learning framework, we study recently proposed algorithms for "model shift", where the conditional distribution P(Y|X) is allowed to change across tasks but the change is assumed to be smooth. In addition, we illustrate our results with experiments on both simulated and real data. |
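The abstract describes a family of multi-task algorithms that couple per-task weight vectors through a reweighting matrix on task weights. As a rough illustration only, here is a minimal sketch assuming linear per-task models, a squared loss, and a known positive semidefinite task-relation matrix; the function name `multitask_ridge`, the penalty form tr(Wᵀ Ω W), and all parameters are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch (NOT the paper's exact method): joint gradient descent on
# per-task squared losses plus a task-relation penalty lam * tr(W^T Omega W),
# which equals sum_{s,t} Omega[s,t] * <w_s, w_t>.
import numpy as np

def multitask_ridge(Xs, ys, Omega, lam=1.0, lr=1e-2, n_iter=500):
    """Jointly fit T linear tasks with a reweighting-matrix penalty.

    Xs, ys : per-task data; Xs[t] has shape (n_t, d), ys[t] shape (n_t,)
    Omega  : (T, T) PSD task-relation matrix; off-diagonal structure
             encodes how strongly pairs of task weights are pulled together
    """
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((T, d))                       # row t is task t's weight vector
    for _ in range(n_iter):
        grad = np.zeros_like(W)
        for t in range(T):
            resid = Xs[t] @ W[t] - ys[t]       # squared-loss residuals for task t
            grad[t] = Xs[t].T @ resid / len(ys[t])
        grad += 2.0 * lam * (Omega @ W)        # gradient of lam * tr(W^T Omega W)
        W -= lr * grad
    return W

# Example: two related tasks coupled by a graph-Laplacian-style Omega,
# which penalizes ||w_0 - w_1||^2 and so enforces a "smooth" task relation.
rng = np.random.default_rng(0)
Xs = [rng.normal(size=(30, 5)) for _ in range(2)]
w_true = rng.normal(size=5)
ys = [X @ w_true + 0.1 * rng.normal(size=30) for X in Xs]
Omega = np.array([[1.0, -1.0], [-1.0, 1.0]])
W = multitask_ridge(Xs, ys, Omega, lam=0.5)
```

With a graph-Laplacian choice of Omega, the penalty reduces to a sum of squared differences between related tasks' weight vectors, which is one common instantiation of the smoothness assumption the abstract refers to.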
Year | Venue | Field |
---|---|---|
2016 | IJCAI | Online machine learning, Multi-task learning, Semi-supervised learning, Stability (learning theory), Instance-based learning, Computer science, Empirical risk minimization, Transfer of learning, Unsupervised learning, Artificial intelligence, Machine learning
DocType | Citations | PageRank
---|---|---|
Conference | 2 | 0.38
References | Authors
---|---|
11 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Xuezhi Wang | 1 | 50 | 5.24 |
Junier B. Oliva | 2 | 38 | 10.18 |
Jeff G. Schneider | 3 | 1616 | 165.43 |
Barnabás Póczos | 4 | 819 | 76.53 |