Abstract |
---|
The purpose of generalized zero-shot classification (GZSC) is to classify test samples regardless of whether they come from training classes (seen classes) or new classes (unseen classes). However, no labeled samples from the new classes are available during training. Thus, selecting some unseen samples and labeling them is important for GZSC. We present two novel ideas for GZSC: (1) splitting target samples into seen and unseen ones; (2) improving GZSC performance with fewer manual annotations of the unseen samples. We propose an active unseen sample selection framework for GZSC tasks (AUSS). Specifically, a two-stage coarse-to-fine-grained selection method first splits target samples into seen and unseen ones. The selected unseen samples can be divided into high-confidence and informative ones. Unlike traditional active learning methods, which focus only on the informative samples, we especially focus on the large number of high-confidence unseen samples. The high-confidence unseen samples are assigned pseudo labels, so they do not need to be manually labeled. We select as few informative unseen samples as possible for manual labeling. Thanks to these high-confidence and informative unseen samples, we do not need to train generative models to generate virtual unseen samples. Experiments on widely adopted GZSC benchmarks demonstrate the advantages of AUSS over existing methods. |
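The abstract's split of selected unseen samples into high-confidence ones (assigned pseudo labels automatically) and informative ones (sent for manual annotation) can be sketched with a simple confidence threshold. This is a minimal illustration only: the function name, the threshold `tau`, and the use of top-class probability as the confidence score are assumptions for the sketch, not the paper's actual two-stage coarse-to-fine method.

```python
import numpy as np

def split_unseen_by_confidence(probs, tau=0.9):
    """Split predicted-unseen samples by confidence.

    probs : (n, k) array of class probabilities over unseen classes.
    tau   : confidence threshold (illustrative value, not from the paper).

    Returns indices of high-confidence samples, their pseudo labels,
    and indices of informative samples to send for manual annotation.
    """
    conf = probs.max(axis=1)                 # top-class probability per sample
    high = conf >= tau                       # confident -> pseudo-label
    pseudo_labels = probs[high].argmax(axis=1)
    informative_idx = np.where(~high)[0]     # uncertain -> query an annotator
    return np.where(high)[0], pseudo_labels, informative_idx

# Toy example: 3 samples over 2 hypothetical unseen classes.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90]])
high_idx, pseudo, info_idx = split_unseen_by_confidence(probs, tau=0.9)
# Samples 0 and 2 are pseudo-labeled; sample 1 goes to the annotator.
```

In this sketch the cheap pseudo-labeled set grows with the threshold's strictness relaxed, while the manually labeled set stays small, matching the paper's stated goal of reducing annotation cost.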
Year | DOI | Venue |
---|---|---|
2022 | 10.1007/s13042-022-01509-7 | International Journal of Machine Learning and Cybernetics |
Keywords | DocType | Volume
---|---|---
Generalized zero-shot classification, Active learning, Informative unseen samples, High-confidence unseen samples | Journal | 13

Issue | ISSN | Citations
---|---|---
8 | 1868-8071 | 0

PageRank | References | Authors
---|---|---
0.34 | 7 | 3