Title
DualPrompt: Complementary Prompting for Rehearsal-Free Continual Learning
Abstract
Continual learning aims to enable a single model to learn a sequence of tasks without catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store past pristine examples for experience replay, which, however, limits their practical value due to privacy and memory constraints. In this work, we present a simple yet effective framework, DualPrompt, which learns a tiny set of parameters, called prompts, to properly instruct a pre-trained model to learn tasks arriving sequentially without buffering past examples. DualPrompt presents a novel approach to attach complementary prompts to the pre-trained backbone, and then formulates the objective as learning task-invariant and task-specific “instructions”. With extensive experimental validation, DualPrompt consistently sets state-of-the-art performance under the challenging class-incremental setting. In particular, DualPrompt outperforms recent advanced continual learning methods with relatively large buffer sizes. We also introduce a more challenging benchmark, Split ImageNet-R, to help generalize rehearsal-free continual learning research. Source code is available at https://github.com/google-research/l2p.
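The core idea in the abstract, training only a tiny set of prompt parameters while the pre-trained backbone stays frozen, with a shared task-invariant prompt plus a task-specific prompt, can be sketched as below. All names, shapes, and the simple input-concatenation scheme are illustrative assumptions for this sketch; the actual method attaches prompts at selected transformer layers.

```python
import numpy as np

# Illustrative sketch of complementary prompting (assumed shapes/names,
# not the paper's exact mechanism, which uses prompts inside specific layers).
embed_dim = 8  # token embedding size (illustrative)

# Task-invariant "general" prompt, shared by every task.
g_prompt = np.random.randn(2, embed_dim)
# Task-specific "expert" prompts, one small set per task (4 tasks here).
e_prompts = {t: np.random.randn(3, embed_dim) for t in range(4)}

def attach_prompts(tokens: np.ndarray, task_id: int) -> np.ndarray:
    """Prepend the shared prompt and the selected task prompt to the tokens.

    Only the prompt parameters would receive gradients during training;
    the backbone that consumes the extended sequence stays frozen.
    """
    return np.concatenate([g_prompt, e_prompts[task_id], tokens], axis=0)

patch_tokens = np.random.randn(16, embed_dim)  # e.g. ViT patch embeddings
extended = attach_prompts(patch_tokens, task_id=0)
print(extended.shape)  # (2 + 3 + 16, 8) -> (21, 8)
```

No past examples are buffered: adapting to a new task only adds one more small entry to `e_prompts`, which is why the memory footprint stays tiny compared to rehearsal buffers.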
Year
2022
DOI
10.1007/978-3-031-19809-0_36
Venue
European Conference on Computer Vision
Keywords
Continual learning, Rehearsal-free, Prompt-based learning
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
11
Name            Order  Citations  PageRank
Zifeng Wang         1          6      3.50
Zizhao Zhang        2        140     17.42
Sayna Ebrahimi      3          0      0.34
Ruoxi Sun           4          0      0.68
Han Zhang           5        243     15.29
Chen-Yu Lee         6          0      0.34
Xiaoqi Ren          7          0      0.68
Guolong Su          8          0      1.01
Vincent Perot       9          0      1.01
Jennifer G. Dy     10       1567    126.18
Tomas Pfister      11        431     21.52