Title
Learning Unsupervised and Supervised Representations via General Covariance
Abstract
Component analysis (CA) is a powerful technique for learning discriminative representations in various computer vision tasks. Typical CA methods are essentially based on the covariance matrix of the training data. However, the covariance matrix has clear disadvantages, such as failing to model complex relationships among features and becoming singular in small-sample-size cases. In this letter, we propose a general covariance measure to achieve better data representations. The proposed covariance is characterized by a nonlinear mapping determined by domain-specific applications, leading to greater flexibility and applicability in practice. Building on the general covariance, we further present two novel CA methods for learning compact representations and discuss how they differ from conventional methods. Experimental results on nine benchmark data sets demonstrate the effectiveness of the proposed methods in terms of accuracy.
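As a rough illustration of the idea the abstract describes (not the paper's actual formulation), the sketch below contrasts standard covariance-based component analysis with a covariance computed after a domain-specific nonlinear mapping; the helper names and the choice of tanh as the mapping are assumptions for demonstration only.

```python
import numpy as np

def pca_components(X, k):
    # Standard CA/PCA: eigendecompose the sample covariance of centered data
    # and keep the eigenvectors of the k largest eigenvalues.
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / (X.shape[0] - 1)
    vals, vecs = np.linalg.eigh(C)           # eigenvalues in ascending order
    return vecs[:, np.argsort(vals)[::-1][:k]]

def general_covariance(X, phi):
    # Illustrative "general covariance": apply a nonlinear mapping phi
    # (chosen per application) before forming the covariance matrix.
    # The paper's exact definition may differ; this is a sketch.
    Z = phi(X)
    Zc = Z - Z.mean(axis=0)
    return Zc.T @ Zc / (Z.shape[0] - 1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                # 100 samples, 5 features
W = pca_components(X, 2)                     # top-2 projection directions
C_gen = general_covariance(X, np.tanh)       # tanh as an example mapping
```

The same eigendecomposition step can then be run on `C_gen` in place of the ordinary covariance to obtain the nonlinearly informed components.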
Year: 2021
DOI: 10.1109/LSP.2020.3044026
Venue: IEEE Signal Processing Letters
Keywords: Component analysis, dimension reduction, representation learning
DocType: Journal
Volume: 28
Issue: 99
ISSN: 1070-9908
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name | Order | Citations | PageRank
Yunhao Yuan | 1 | 19 | 4.64
Jin Li | 2 | 1 | 1.03
Yun Li | 3 | 10 | 2.56
Jianping Gou | 4 | 116 | 24.01
Jipeng Qiang | 5 | 42 | 13.63