Abstract |
---|
Continual learning aims to enable a single model to learn a sequence of tasks without catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store past pristine examples for experience replay, which, however, limits their practical value due to privacy and memory constraints. In this work, we present a simple yet effective framework, DualPrompt, which learns a tiny set of parameters, called prompts, to properly instruct a pre-trained model to learn tasks arriving sequentially without buffering past examples. DualPrompt presents a novel approach to attach complementary prompts to the pre-trained backbone, and then formulates the objective as learning task-invariant and task-specific “instructions”. With extensive experimental validation, DualPrompt consistently sets state-of-the-art performance under the challenging class-incremental setting. In particular, DualPrompt outperforms recent advanced continual learning methods with relatively large buffer sizes. We also introduce a more challenging benchmark, Split ImageNet-R, to help generalize rehearsal-free continual learning research. Source code is available at https://github.com/google-research/l2p. |
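The abstract's core idea — a frozen pre-trained backbone instructed by a small set of learnable prompts, one shared (task-invariant) and one selected per task (task-specific) — can be illustrated with a minimal sketch. This is not the paper's implementation (the released code is at the URL above); the names `g_prompt`/`e_prompts`, the prompt lengths, and the simplification of prepending all prompts at the input are assumptions made here for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim, seq_len = 768, 196   # ViT-B/16-like token embeddings (assumed sizes)
num_tasks = 10

# Task-invariant "general" prompt, shared across all tasks (trainable;
# the backbone itself stays frozen).
g_prompt = rng.normal(0, 0.02, size=(5, embed_dim))

# One task-specific "expert" prompt per task.
e_prompts = [rng.normal(0, 0.02, size=(20, embed_dim)) for _ in range(num_tasks)]

def prepend_prompts(x, task_id):
    """Attach complementary prompts to the frozen input embeddings.

    x: (seq_len, embed_dim) token embeddings from the pre-trained model.
    Returns the extended sequence the transformer layers would consume.
    """
    return np.concatenate([g_prompt, e_prompts[task_id], x], axis=0)

x = rng.normal(size=(seq_len, embed_dim))
extended = prepend_prompts(x, task_id=3)
print(extended.shape)  # (221, 768): 5 general + 20 expert + 196 input tokens
```

Only the few thousand prompt parameters would be updated during sequential training, which is why no rehearsal buffer of past examples is needed.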
Year | DOI | Venue
---|---|---
2022 | 10.1007/978-3-031-19809-0_36 | European Conference on Computer Vision

Keywords | DocType | Citations
---|---|---
Continual learning, Rehearsal-free, Prompt-based learning | Conference | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 11
Name | Order | Citations | PageRank |
---|---|---|---
Zifeng Wang | 1 | 6 | 3.50 |
Zizhao Zhang | 2 | 140 | 17.42 |
Sayna Ebrahimi | 3 | 0 | 0.34 |
Ruoxi Sun | 4 | 0 | 0.68 |
Han Zhang | 5 | 243 | 15.29 |
Chen-Yu Lee | 6 | 0 | 0.34 |
Xiaoqi Ren | 7 | 0 | 0.68 |
Guolong Su | 8 | 0 | 1.01 |
Vincent Perot | 9 | 0 | 1.01 |
Jennifer G. Dy | 10 | 1567 | 126.18 |
Tomas Pfister | 11 | 431 | 21.52 |