Abstract |
---|
Deep neural networks are known to suffer from catastrophic forgetting in class-incremental learning, where performance on previous tasks degrades drastically when a new task is learned. To alleviate this effect, we propose to leverage a continuous and large stream of unlabeled data in the wild. In particular, to leverage such transient external data effectively, we design a novel class-incremental learning scheme with (a) a new distillation loss, termed global distillation, (b) a learning strategy to avoid overfitting to the most recent task, and (c) a sampling strategy for the desired external data. Our experimental results on various datasets, including CIFAR and ImageNet, demonstrate the superiority of the proposed methods over prior methods, particularly when a stream of unlabeled data is accessible: we achieve up to 9.3% relative performance improvement over the state-of-the-art method. |
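The central component named in the abstract, the global distillation loss, lends itself to a brief illustration. Below is a minimal PyTorch sketch, assuming that the essence of global distillation is a KL term computed over the softmax of all classes seen so far (rather than separately per task, as in task-wise "local" distillation); the function name, default temperature, and usage pattern are hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def global_distillation_loss(student_logits: torch.Tensor,
                             teacher_logits: torch.Tensor,
                             temperature: float = 2.0) -> torch.Tensor:
    """KL distillation over the union of all classes seen so far.

    Hypothetical sketch: the softmax is taken jointly over every class,
    so relations between classes from different tasks are preserved in
    the student, unlike per-task ("local") distillation.
    """
    # Temperature-softened teacher distribution; no gradient through the teacher.
    with torch.no_grad():
        p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    # The T^2 rescaling keeps gradient magnitudes comparable to the CE term.
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2

# Illustrative usage on a batch mixing labeled current-task examples with
# sampled unlabeled external data (mask `labeled` and labels `y` assumed):
# loss = F.cross_entropy(student_logits[labeled], y) \
#        + global_distillation_loss(student_logits, teacher_logits)
```

In such a scheme, the distillation term would typically be applied to the sampled unlabeled external data as well as to the labeled data of the current task, which is what allows the stream of wild data to carry knowledge of previous classes into the new model.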
Year | Venue | Field
---|---|---|
2019 | CVPR Workshops | Forgetting, Computer science, Incremental learning, Distillation, Artificial intelligence, Sampling (statistics), Overfitting, Deep neural networks, Machine learning, Performance improvement

DocType | Volume | Citations
---|---|---|
Journal | abs/1903.12648 | 1

PageRank | References | Authors
---|---|---|
0.35 | 0 | 4

Name | Order | Citations | PageRank |
---|---|---|---|
Kibok Lee | 1 | 68 | 5.16 |
Kimin Lee | 2 | 51 | 11.57 |
Jinwoo Shin | 3 | 513 | 56.35 |
Honglak Lee | 4 | 6247 | 398.39 |