Title
Incremental Zero-Shot Learning
Abstract
The goal of zero-shot learning (ZSL) is to correctly recognize objects from unseen classes without corresponding training samples. Existing ZSL methods are trained on a set of predefined classes and cannot learn from a stream of training data. However, in many real-world applications, training data are collected incrementally; this is one of the main reasons why ZSL methods cannot be applied in certain real-world situations. Accordingly, to handle practical learning tasks of this kind, we introduce a novel ZSL setting, referred to as incremental ZSL (IZSL), whose goal is to accumulate historical knowledge and alleviate catastrophic forgetting so as to facilitate better recognition when incrementally trained on new classes. We further propose a novel method to realize IZSL, which employs a generative replay strategy to produce virtual samples of previously seen classes. Historical knowledge is then transferred from the former learning step to the current step through joint training on both real new data and virtual old data. Subsequently, a knowledge distillation strategy is leveraged to distill knowledge from the former model into the current model, which regularizes the training of the current model. In addition, our method can be flexibly combined with most generative ZSL methods to tackle IZSL. Extensive experiments on three challenging benchmarks indicate that the proposed method effectively tackles the IZSL problem, while existing ZSL methods fail.
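The training scheme sketched in the abstract combines two ingredients: a replay batch that mixes real samples of new classes with generator-synthesized samples of old classes, and a distillation loss that keeps the current model close to the former one. The following is a minimal illustrative sketch, not the paper's implementation; the `generator` callable, function names, and the plain temperature-softened cross-entropy form of the distillation term are all assumptions.

```python
import math
import random


def softmax(logits, temperature=1.0):
    """Numerically stable softmax over a list of logits, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the former (teacher) model's softened outputs
    and the current (student) model's softened outputs; minimized when the
    student reproduces the teacher's predictive distribution."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))


def build_replay_batch(new_samples, generator, old_classes, n_virtual):
    """Generative replay: augment real samples of new classes with virtual
    samples synthesized for previously seen classes by a (hypothetical)
    conditional generator."""
    virtual = [(generator(c), c) for c in random.choices(old_classes, k=n_virtual)]
    return list(new_samples) + virtual
```

In a full IZSL step, the joint batch from `build_replay_batch` would drive the usual classification loss, while `distillation_loss` (computed against the frozen former model's logits) is added as a regularizer.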
Year
2022
DOI
10.1109/TCYB.2021.3110369
Venue
IEEE Transactions on Cybernetics
Keywords
Generative replay, incremental learning, knowledge distillation, transfer knowledge, zero-shot learning (ZSL)
DocType
Journal
Volume
52
Issue
2
ISSN
2168-2267
Citations
12
PageRank
0.36
References
9
Authors
4
Name          Order  Citations  PageRank
Kun Wei       1      12         4.55
Cheng Deng    2      1283       85.48
Xu Yang       3      45         8.16
Dacheng Tao   4      19032      747.78