Abstract |
---|
Cluster analysis is sensitive to noise variables intrinsically contained within high dimensional data sets. As the size of data sets increases, clustering techniques robust to noise variables must be identified. This investigation gauges the capabilities of recent clustering algorithms applied to two real data sets increasingly perturbed by superfluous noise variables. The recent techniques include mixture models of factor analysers and auto-associative multivariate regression trees. Statistical techniques are integrated to create two approaches useful for clustering noisy data: multivariate regression trees with principal component scores and multivariate regression trees with factor scores. The tree techniques generate the superior clustering results. |
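The abstract's preferred approach — project the noisy data onto a reduced-dimension space, then grow a multivariate regression tree whose terminal nodes act as clusters — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: scikit-learn's `PCA` and multi-output `DecisionTreeRegressor` stand in for the authors' multivariate regression tree, and the iris data plus appended Gaussian columns stand in for the paper's two real data sets perturbed by noise variables.

```python
# Hypothetical sketch of "multivariate regression trees with principal
# component scores" using scikit-learn stand-ins (not the paper's code).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = load_iris().data                         # 150 x 4 real data matrix
noise = rng.normal(size=(X.shape[0], 10))    # superfluous noise variables
X_noisy = np.hstack([X, noise])              # increasingly perturbed data

# Step 1: dimension reduction -- scores on the leading principal components.
scores = PCA(n_components=2).fit_transform(X_noisy)

# Step 2: multivariate regression tree -- the (noisy) variables predict the
# component scores; each terminal node of the tree is taken as a cluster.
tree = DecisionTreeRegressor(max_leaf_nodes=3, random_state=0)
tree.fit(X_noisy, scores)
clusters = tree.apply(X_noisy)               # leaf index = cluster label

print(len(np.unique(clusters)))              # number of clusters found
```

Replacing the principal component scores with factor scores from a fitted factor model gives the abstract's second tree-based variant.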
Year | DOI | Venue |
---|---|---|
2006 | 10.1016/j.patcog.2005.09.003 | Pattern Recognition |
Keywords | Field | DocType
---|---|---|
superior clustering result, noise variable, auto-associative multivariate regression tree, multivariate regression tree, superfluous noise variable, recent clustering, noisy data, clustering technique, reduced dimension space, high dimensional data set, multivariate regression, mixture model, high dimensional data, principal component, cluster analysis, dimension reduction | Data set, Dimensionality reduction, Pattern recognition, Multivariate statistics, Data matrix (multivariate statistics), Bayesian multivariate linear regression, Artificial intelligence, Cluster analysis, Mathematics, Principal component analysis, Mixture model | Journal |
Volume | Issue | ISSN
---|---|---|
39 | 3 | 0031-3203 |
Citations | PageRank | References
---|---|---|
6 | 0.57 | 1 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Christine Smyth | 1 | 7 | 0.93 |
Danny Coomans | 2 | 105 | 19.07 |
Yvette Everingham | 3 | 6 | 1.59 |