Abstract |
---|
Auto-Encoders (AE) play an important role in feature extraction, fusion, and representation learning. In particular, on medical datasets, where labeled data is often scarce, they can outperform transfer-learning approaches. Many auto-encoding methods have been proposed; however, existing methods either preserve the latent space (with lower predictive power) or provide good predictive performance (with a distorted latent space). This work presents a Discriminative Auto-Encoding (DiscAE) approach that provides better representations along with decent reconstructions. A clustering constraint is imposed through a discriminator on the latent space of a simple auto-encoder, enforcing a Gaussian distribution over the latent space. The network is trained by alternating between the decoder and the discriminator. Furthermore, the role of noise in improving reconstructions and regularizing the latent space is explored. Unlike Variational Auto-Encoders (VAE), DiscAE achieves better performance by utilizing the labeled data. The behavior of DiscAE in a semi-supervised setting is also discussed. To support these claims, extensive experiments and ablations were carried out on three benchmark datasets. DiscAE was found to outperform existing auto-encoding approaches on predictive tasks while maintaining the quality of reconstructions. |
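The abstract describes alternating between a reconstruction objective and a discriminator that pushes latent codes toward a Gaussian distribution, in the spirit of adversarial auto-encoding. A minimal NumPy sketch under that reading is given below; the linear encoder/decoder, logistic discriminator, and learning rates are illustrative assumptions, not the paper's actual DiscAE architecture or training details.

```python
# Hypothetical sketch of discriminator-regularized auto-encoding:
# linear encoder/decoder trained on MSE, alternated with a logistic
# discriminator that labels N(0, I) samples "real" and codes "fake".
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 256, 8, 2                          # samples, input dim, latent dim
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))  # toy correlated data

We = rng.normal(scale=0.1, size=(d, k))      # encoder weights
Wd = rng.normal(scale=0.1, size=(k, d))      # decoder weights
w, b = np.zeros(k), 0.0                      # logistic discriminator on codes

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def recon_loss():
    return float(((X @ We @ Wd - X) ** 2).mean())

lr, lr_adv = 0.005, 1e-3
loss_before = recon_loss()
for step in range(200):
    # (1) reconstruction step: update encoder and decoder on MSE
    Z = X @ We
    R = 2.0 * (Z @ Wd - X) / n               # dL/d(X_hat), per-sample mean
    Wd -= lr * (Z.T @ R)
    We -= lr * (X.T @ (R @ Wd.T))

    # (2) discriminator step: separate N(0, I) draws (y=1) from codes (y=0)
    Zf = X @ We
    Zr = rng.normal(size=(n, k))
    for Zb, y in ((Zr, 1.0), (Zf, 0.0)):
        p = sigmoid(Zb @ w + b)
        g = (p - y) / n                      # dBCE/dlogit
        w -= lr * (Zb.T @ g)
        b -= lr * g.sum()

    # (3) adversarial step: nudge encoder so codes look Gaussian ("real")
    p = sigmoid(X @ We @ w + b)
    gz = np.outer(p - 1.0, w) / n            # gradient of -log p through codes
    We -= lr_adv * (X.T @ gz)

loss_after = recon_loss()
print(f"reconstruction MSE: {loss_before:.3f} -> {loss_after:.3f}")
```

The small adversarial learning rate keeps the Gaussian-shaping pressure from overwhelming reconstruction quality, mirroring the trade-off between latent-space regularity and predictive power that the abstract highlights.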
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/LSP.2021.3077853 | IEEE SIGNAL PROCESSING LETTERS |
Keywords | DocType | Volume
---|---|---|
Training, Task analysis, Decoding, Mathematical model, Signal to noise ratio, Image reconstruction, Estimation, Auto-encoding, Discriminator, Embedding, Multiclass classification, Representation learning | Journal | 28
ISSN | Citations | PageRank
---|---|---|
1070-9908 | 0 | 0.34
References | Authors
---|---|
0 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Vipul Bansal | 1 | 0 | 1.01 |
Himanshu Buckchash | 2 | 0 | 2.03 |
Balasubramanian Raman | 3 | 679 | 70.23 |