Title
Classify and generate: Using classification latent space representations for image generations
Abstract
Utilizing classification latent space information for downstream reconstruction and generation is an intriguing and relatively unexplored area. In general, discriminative representations are rich in class-specific features but are too sparse for reconstruction, whereas autoencoder representations are dense but carry only limited, largely indistinguishable class-specific features, making them less suitable for classification. In this work, we propose a discriminative modelling framework that employs manipulated supervised latent representations to reconstruct and generate new samples belonging to a given class. Unlike generative modelling approaches such as GANs and VAEs, which aim to model the data manifold distribution, Representation-based Generations (ReGene) directly represents the given data manifold in the classification space. Such supervised representations, under certain constraints, allow for reconstruction and controlled generation using an appropriate decoder without enforcing any prior distribution. Theoretically, given a class, we show that these representations, when suitably manipulated using convex combinations, retain the same class label. Furthermore, they also lead to the generation of novel, visually realistic images. Extensive experiments on datasets of varying resolutions demonstrate that ReGene achieves higher classification accuracy than existing conditional generative models while remaining competitive in terms of FID.
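The following is a minimal sketch, not the authors' implementation, of the core idea stated in the abstract: take latent codes of same-class images from a classifier's penultimate layer, mix them with a convex combination, and pass the mixture to a decoder to generate a new image of that class. The `encoder`, `decoder`, image shape, and latent dimension below are hypothetical stand-ins for the trained networks described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG_SHAPE = (32, 32, 3)
LATENT_DIM = 128

# Fixed random projections standing in for trained networks (illustration only).
_ENC_W = rng.standard_normal((LATENT_DIM, int(np.prod(IMG_SHAPE))))
_DEC_W = rng.standard_normal((int(np.prod(IMG_SHAPE)), LATENT_DIM))


def encoder(image):
    """Stand-in for a trained classifier's penultimate-layer embedding."""
    return _ENC_W @ image.ravel()


def decoder(z):
    """Stand-in for a decoder trained to map classifier embeddings back to images."""
    return (_DEC_W @ z).reshape(IMG_SHAPE)


def convex_combination(latents, weights):
    """Mix same-class latent codes with non-negative weights summing to one."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return np.tensordot(weights, np.stack(latents), axes=1)


# Two dummy images assumed to belong to the same class.
img_a = rng.random(IMG_SHAPE)
img_b = rng.random(IMG_SHAPE)

# Convexly combine their classification-space codes and decode the mixture;
# the abstract's claim is that such mixtures retain the original class label.
z_mix = convex_combination([encoder(img_a), encoder(img_b)], weights=[0.6, 0.4])
generated = decoder(z_mix)
print(generated.shape)  # (32, 32, 3)
```

In practice the encoder would be a trained classifier and the decoder a network trained to reconstruct images from its embeddings; the random projections here only illustrate the data flow of mixing and decoding.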
Year
2022
DOI
10.1016/j.neucom.2021.10.090
Venue
NEUROCOMPUTING
Keywords
Classification latent space, Convex combination, Image generation
DocType
Journal
Volume
471
ISSN
0925-2312
Citations
0
PageRank
0.34
References
0
Authors
6