| Abstract |
|---|
| Component analysis (CA) is a powerful technique for learning discriminative representations in various computer vision tasks. Typical CA methods are essentially based on the covariance matrix of the training data. However, the covariance matrix has clear disadvantages, such as failing to model complex relationships among features and becoming singular in small sample size cases. In this letter, we propose a general covariance measure to achieve better data representations. The proposed covariance is characterized by a nonlinear mapping determined by domain-specific applications, leading to greater flexibility and applicability in practice. With general covariance, we further present two novel CA methods for learning compact representations and discuss their differences from conventional methods. A series of experimental results on nine benchmark data sets demonstrates the effectiveness of the proposed methods in terms of accuracy. |
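The small sample size problem mentioned in the abstract can be illustrated directly: the sample covariance matrix of n samples in d dimensions has rank at most n − 1, so it is singular whenever n ≤ d. A minimal sketch (not the paper's method, just the standard covariance computation):

```python
import numpy as np

# Illustration of the small sample size problem: with n samples in
# d dimensions, the sample covariance matrix has rank at most n - 1,
# so it is singular (non-invertible) whenever n <= d.
rng = np.random.default_rng(0)
n, d = 10, 50                    # fewer samples than features
X = rng.standard_normal((n, d))

cov = np.cov(X, rowvar=False)    # d x d sample covariance
rank = np.linalg.matrix_rank(cov)

print(rank)        # at most n - 1 = 9, far below d = 50
print(rank < d)    # singular: cannot be inverted as-is
```

This is why covariance-based CA methods typically require regularization or, as proposed here, a generalized covariance measure in such regimes.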
| Year | DOI | Venue |
|---|---|---|
| 2021 | 10.1109/LSP.2020.3044026 | IEEE Signal Processing Letters |

| Keywords | DocType | Volume |
|---|---|---|
| Component analysis, dimension reduction, representation learning | Journal | 28 |

| Issue | ISSN | Citations |
|---|---|---|
| 99 | 1070-9908 | 0 |

| PageRank | References | Authors |
|---|---|---|
| 0.34 | 0 | 5 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Yunhao Yuan | 1 | 19 | 4.64 |
| Jin Li | 2 | 1 | 1.03 |
| Yun Li | 3 | 10 | 2.56 |
| Jianping Gou | 4 | 116 | 24.01 |
| Jipeng Qiang | 5 | 42 | 13.63 |