| Abstract |
| --- |
| Feature selection is an effective way to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature selection method that performs batch feature selection using both supervised and unsupervised data samples. The objective function consists of three parts: first, under the assumption that each data sample has been assigned a class label, the ratio of the between-class scatter matrix to the total scatter matrix should be minimized, where the scatter matrices are formed from the selected features of these data samples; second, linear regression is used to model the correlations between the data samples with supervision information and their class labels; last, an l2,1-norm regularizer guarantees the sparsity of the feature selection matrix and jointly exploits the information shared between supervised and unsupervised data samples. Unlike existing methods, our approach exploits local discriminative information to construct the model, and extensive experiments show that it outperforms existing methods. |
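The two building blocks named in the abstract — class scatter matrices and the l2,1-norm of a feature selection matrix — can be sketched as follows. This is a minimal NumPy illustration of those standard quantities, not the authors' implementation; the function names and the toy data are ours.

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: sum of the l2 norms of the rows of W.
    Penalizing it drives whole rows of W to zero, so the
    corresponding features are effectively discarded."""
    return np.sum(np.linalg.norm(W, axis=1))

def scatter_matrices(X, y):
    """Between-class scatter Sb and total scatter St for samples
    X (n x d) with class labels y (standard definitions; the paper
    builds these from the selected features only)."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Xc = X - mean
    St = Xc.T @ Xc                      # total scatter
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xk = X[y == c]
        diff = (Xk.mean(axis=0) - mean).reshape(-1, 1)
        Sb += Xk.shape[0] * (diff @ diff.T)  # between-class scatter
    return Sb, St

# Toy example: 4 points in 2-D, two classes separated along axis 1.
X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
y = np.array([0, 0, 1, 1])
Sb, St = scatter_matrices(X, y)
```

By construction St = Sb + Sw, where Sw is the within-class scatter, so the trace-ratio objectives built from these matrices are interchangeable up to a sign/complement.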
| Year | DOI | Venue |
| --- | --- | --- |
| 2016 | 10.1016/j.neucom.2015.05.119 | Neurocomputing |
| Keywords | Field | DocType |
| --- | --- | --- |
| Feature selection, Semi-supervised learning, Local discriminative information, l2,1-norm, Linear regression | Data mining, Semi-supervised learning, Feature selection, Matrix (mathematics), Artificial intelligence, Discriminative model, Scatter matrix, Clustering high-dimensional data, Sample (statistics), Pattern recognition, Curse of dimensionality, Mathematics, Machine learning | Journal |
| Volume | Issue | ISSN |
| --- | --- | --- |
| 173 | P1 | 0925-2312 |
| Citations | PageRank | References |
| --- | --- | --- |
| 17 | 0.60 | 25 |
| Authors |
| --- |
| 4 |
| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Zhiqiang Zeng | 1 | 139 | 16.35 |
| Xiaodong Wang | 2 | 35 | 5.19 |
| Jian Zhang | 3 | 1305 | 100.05 |
| Qun Wu | 4 | 22 | 3.48 |