Title
Zero-VAE-GAN: Generating Unseen Features for Generalized and Transductive Zero-Shot Learning
Abstract
Zero-shot learning (ZSL) is a challenging task due to the lack of unseen class data during training. Existing works attempt to establish a mapping between the visual and class spaces through a common intermediate semantic space. The main limitation of such methods is a strong bias towards seen classes, known as the domain shift problem, which leads to unsatisfactory performance on both conventional and generalized ZSL tasks. To tackle this challenge, we propose to convert ZSL into a conventional supervised learning problem by generating features for unseen classes. To this end, we propose a joint generative model, called Zero-VAE-GAN, which couples a variational autoencoder (VAE) and a generative adversarial network (GAN) to generate high-quality unseen features. To enhance class-level discriminability, an adversarial categorization network is incorporated into the joint framework. In addition, we propose two self-training strategies to augment unlabeled unseen features for the transductive extension of our model, addressing the domain shift problem to a large extent. Experimental results on five standard benchmarks and a large-scale dataset demonstrate the superiority of our generative model over state-of-the-art methods on conventional and, especially, generalized ZSL tasks. Moreover, the further improvement under the transductive setting demonstrates the effectiveness of the proposed self-training strategies.
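The abstract describes a conditional VAE coupled with a GAN that synthesizes features for unseen classes, after which a standard classifier can be trained as in ordinary supervised learning. The sketch below illustrates that general idea in PyTorch; all layer sizes, module names (Encoder, Generator, Discriminator), and dimensionalities are hypothetical placeholders, not the paper's exact architecture or training procedure.

```python
# Minimal sketch of a conditional VAE-GAN feature generator for ZSL.
# Assumed, illustrative dimensions (not from the paper):
import torch
import torch.nn as nn

FEAT_DIM, ATTR_DIM, Z_DIM = 2048, 312, 64  # visual feature, class embedding, latent

class Encoder(nn.Module):
    """VAE encoder: maps a (visual feature, class embedding) pair to q(z|x,a)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM + ATTR_DIM, 1024), nn.ReLU())
        self.mu = nn.Linear(1024, Z_DIM)
        self.logvar = nn.Linear(1024, Z_DIM)

    def forward(self, x, a):
        h = self.net(torch.cat([x, a], dim=1))
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):
    """Doubles as the VAE decoder and the GAN generator: (z, a) -> feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + ATTR_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, FEAT_DIM), nn.ReLU())

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=1))

class Discriminator(nn.Module):
    """Distinguishes real from synthesized features, conditioned on the class embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + ATTR_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, 1))

    def forward(self, x, a):
        return self.net(torch.cat([x, a], dim=1))

def reparameterize(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

def vae_loss(x, x_rec, mu, logvar):
    # Reconstruction term + KL divergence to the standard normal prior
    rec = nn.functional.mse_loss(x_rec, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# After training on seen classes, synthesize labeled features for unseen
# classes from their class embeddings (placeholder tensors here):
gen = Generator()
unseen_attr = torch.randn(500, ATTR_DIM)  # hypothetical unseen class embeddings
fake_unseen = gen(torch.randn(500, Z_DIM), unseen_attr)
```

Once such a generator is trained, the synthesized unseen-class features can be combined with real seen-class features to train any off-the-shelf classifier; this is the sense in which the abstract speaks of converting ZSL into conventional supervised learning. The adversarial categorization network and the self-training strategies mentioned in the abstract would be added on top of this skeleton.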
Year
2020
DOI
10.1109/TIP.2020.2964429
Venue
IEEE TRANSACTIONS ON IMAGE PROCESSING
Keywords
Zero-shot learning, generative model, self-training
DocType
Journal
Volume
29
Issue
1
ISSN
1057-7149
Citations
10
PageRank
0.49
References
32
Authors
8
Name          Order  Citations  PageRank
Rui Gao       1      23         6.42
Xingsong Hou  2      176        18.24
Jie Qin       3      167        17.38
Jiaxin Chen   4      116        8.41
Li Liu        5      1264       61.72
Fan Zhu       6      492        29.38
Zhao Zhang    7      938        65.99
Ling Shao     8      5424       249.92