Abstract |
---|
Meta-learning algorithms are effective at obtaining meta models that can solve new tasks quickly. However, they critically require sufficient tasks for meta-model training, and the resulting model can only solve new tasks similar to the training ones. These limitations cause performance to decline when training tasks in the target domain are insufficient and under task heterogeneity, i.e., when the source (model-training) tasks exhibit different characteristics from the target (model-application) tasks. To overcome these two significant limitations of existing meta-learning algorithms, we introduce the cross-domain meta-learning framework and propose a new transferable meta-learning (TML) algorithm. TML performs meta task adaptation jointly with meta-model learning, which effectively narrows the divergence between source and target tasks and enables transferring source meta-knowledge to solve target tasks. The resulting transferable meta model can thus quickly solve new learning tasks in new domains. We apply the proposed TML to cross-domain few-shot classification problems and evaluate its performance on multiple benchmarks. It performs significantly better and faster than well-established meta-learning algorithms and fine-tuned domain-adapted models. |
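The abstract describes TML's core idea: an inner loop that adapts to each source task, and an outer loop that updates the meta model while simultaneously pulling source and target task representations closer together. The abstract does not specify TML's actual objective or architecture, so the following is only a minimal first-order sketch of that idea on toy 1-D regression tasks, assuming a MAML-style inner/outer loop and a simple feature-mean alignment penalty as a stand-in for the domain-divergence term; all names and hyperparameters here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(input_shift):
    """Toy 1-D regression task y = a * x, with a domain-specific input shift."""
    a = rng.normal(1.0, 0.1)
    X = rng.normal(loc=input_shift, size=(16, 1))
    return X, a * X[:, 0]

s = np.ones(1)   # shared feature scale: meta-learned and aligned across domains
w = np.zeros(1)  # regression head: adapted per task in the inner loop
inner_lr, meta_lr, align = 0.1, 0.05, 0.5  # illustrative hyperparameters

for step in range(300):
    Xs, ys = sample_task(input_shift=0.0)  # labeled source-domain task
    Xt, _ = sample_task(input_shift=1.0)   # unlabeled target-domain inputs
    Fs, Ft = Xs * s, Xt * s                # shared feature transform

    # Inner loop: one gradient step adapts the head to the current source task.
    w_task = w - inner_lr * 2 * Fs.T @ (Fs @ w - ys) / len(ys)

    # Outer loop (first-order approximation): meta-gradient of the
    # post-adaptation loss, plus a feature-mean alignment penalty that
    # narrows the gap between source and target feature distributions.
    resid = Fs @ w_task - ys
    grad_w = 2 * Fs.T @ resid / len(ys)
    grad_s = 2 * np.mean(resid * Xs[:, 0] * w_task[0])          # task-loss term
    grad_s += align * 2 * (Fs.mean() - Ft.mean()) * (Xs.mean() - Xt.mean())
    w -= meta_lr * grad_w
    s -= meta_lr * grad_s
```

The key design point mirrored from the abstract is that the alignment term is optimized jointly with the meta objective in the same outer update, rather than applied as a separate domain-adaptation stage after meta training.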
Year | Venue | Field
---|---|---
2018 | Uncertainty in Artificial Intelligence | Computer science, Task adaptation, Artificial intelligence, Machine learning, Metamodeling

DocType | Citations | PageRank
---|---|---
Conference | 1 | 0.35
References | Authors
---|---
0 | 2
Name | Order | Citations | PageRank
---|---|---|---
Bingyi Kang | 1 | 138 | 9.24
Jiashi Feng | 2 | 2165 | 140.81