Title
Semantic Compression Embedding for Generative Zero-Shot Learning.
Abstract
Generative methods have been successfully applied in zero-shot learning (ZSL): they learn an implicit mapping to alleviate the visual-semantic domain gap and synthesize unseen samples to handle the data imbalance between seen and unseen classes. However, existing generative methods simply use visual features extracted by a pre-trained CNN backbone. These features lack attribute-level semantic information, so seen classes are hard to distinguish and knowledge transfer from seen to unseen classes is limited. To tackle this issue, we propose a novel Semantic Compression Embedding Guided Generation (SC-EGG) model, which cascades a semantic compression embedding network (SCEN) and an embedding-guided generative network (EGGN). The SCEN extracts a group of attribute-level local features for each sample and compresses them into a new low-dimensional visual feature, yielding a dense-semantic visual space. The EGGN then learns a mapping from the class-level semantic space to this dense-semantic visual space, improving the discriminability of the synthesized unseen visual features. Extensive experiments on three benchmark datasets, i.e., CUB, SUN and AWA2, demonstrate significant performance gains of SC-EGG over current state-of-the-art methods and its baselines.
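The two-stage cascade described in the abstract can be sketched as a toy forward pass. This is only an illustrative sketch: the dimensions, the mean-pooling step, and the linear/tanh layers are assumptions for demonstration, not the paper's actual architecture; the key point shown is that the SCEN and the EGGN share the same low-dimensional dense-semantic output space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper):
# 312 attribute-level local features, 2048-d backbone features,
# 64-d compressed dense-semantic space.
N_ATTR, D_VIS, D_DENSE = 312, 2048, 64

def scen(local_feats, W_compress):
    """Semantic Compression Embedding Network (sketch):
    pool the per-attribute local features, then compress them
    into a low-dimensional dense-semantic visual feature."""
    pooled = local_feats.mean(axis=0)   # (D_VIS,)
    return W_compress @ pooled          # (D_DENSE,)

def eggn(class_semantic, noise, W_gen):
    """Embedding Guided Generative Network (sketch):
    map a class-level semantic vector plus noise into the
    dense-semantic visual space learned by the SCEN."""
    z = np.concatenate([class_semantic, noise])  # (N_ATTR + D_DENSE,)
    return np.tanh(W_gen @ z)                    # synthesized feature

# Toy forward pass with random weights.
W_c = rng.standard_normal((D_DENSE, D_VIS)) * 0.01
W_g = rng.standard_normal((D_DENSE, N_ATTR + D_DENSE)) * 0.01
local_feats = rng.standard_normal((N_ATTR, D_VIS))  # attribute-level local features
dense = scen(local_feats, W_c)
fake = eggn(rng.standard_normal(N_ATTR), rng.standard_normal(D_DENSE), W_g)
print(dense.shape, fake.shape)  # real and synthesized features share one space
```

Because the real dense-semantic features and the synthesized unseen features live in the same 64-dimensional space, a downstream ZSL classifier can be trained on both jointly.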
Year
2022
DOI
10.24963/ijcai.2022/134
Venue
International Joint Conference on Artificial Intelligence (IJCAI)
Keywords
Computer Vision: Transfer, low-shot, semi- and un-supervised learning; Computer Vision: Recognition (object detection, categorization)
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
8
Name           Order  Citations  PageRank
Ziming Hong    1      0          1.35
Shiming Chen   2      0          0.34
Yong Xu        3      339        31.64
Wenhan Yang    4      339        35.40
Jian Zhao      5      0          0.34
Yuanjie Shao   6      0          0.68
Qinmu Peng     7      131        15.78
Xinge You      8      1441       83.00