Abstract |
---|
In many real-world applications, data are represented by high-dimensional features. Despite their simplicity, existing K-means subspace clustering algorithms often employ eigenvalue decomposition to generate an approximate solution, which makes them less efficient. Moreover, their loss functions are either sensitive to outliers or insensitive to small losses. In this paper, we propose a fast adaptive K-means (FAKM) subspace clustering model, in which an adaptive loss function provides a flexible cluster-indicator calculation mechanism, making the model suitable for datasets under different distributions. To find the optimal feature subset, FAKM performs clustering and feature selection simultaneously without eigenvalue decomposition, and is therefore efficient for real-world applications. We develop an efficient alternating optimization algorithm to solve the proposed model, together with theoretical analyses of its convergence and computational complexity. Finally, extensive experiments on several benchmark datasets demonstrate the advantages of FAKM over state-of-the-art clustering algorithms. |
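The alternating optimization scheme the abstract refers to builds on the classical K-means objective. As background only, here is a minimal sketch of plain Lloyd's K-means, which alternates an assignment step and a centroid-update step; this is not the paper's FAKM model (the adaptive loss and simultaneous feature selection are not modeled here), just the baseline it extends:

```python
import random
from math import dist

def kmeans(points, k, n_iter=100, seed=0):
    """Plain Lloyd's K-means (illustrative baseline, not FAKM).

    FAKM extends this style of alternating optimization with an
    adaptive loss and simultaneous feature selection; neither of
    those is modeled in this sketch.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest centroid.
        labels = [min(range(k), key=lambda j: dist(p, centroids[j]))
                  for p in points]
        # Update step: each centroid moves to the mean of its cluster.
        new_centroids = []
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                new_centroids.append(tuple(sum(c) / len(members)
                                           for c in zip(*members)))
            else:
                new_centroids.append(centroids[j])  # keep empty cluster fixed
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return labels, centroids
```

Each iteration monotonically decreases the K-means objective, which is why the alternating scheme converges; the paper proves an analogous convergence result for its adaptive-loss objective.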
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/ACCESS.2019.2907043 | IEEE ACCESS |

Keywords | Field | DocType |
---|---|---|
Dimension reduction, feature selection, K-means, discriminative embedded clustering, adaptive learning | Convergence (routing), k-means clustering, Clustering high-dimensional data, Feature selection, Computer science, Outlier, Algorithm, Eigendecomposition of a matrix, Cluster analysis, Computational complexity theory, Distributed computing | Journal |

Volume | ISSN | Citations |
---|---|---|
7 | 2169-3536 | 1 |

PageRank | References | Authors |
---|---|---|
0.36 | 0 | 5 |

Name | Order | Citations | PageRank |
---|---|---|---|
Xiaodong Wang | 1 | 35 | 5.19 |
Rung-Ching Chen | 2 | 331 | 37.37 |
Fei Yan | 3 | 28 | 9.01 |
Zhiqiang Zeng | 4 | 139 | 16.35 |
Chaoqun Hong | 5 | 324 | 13.19 |