| Abstract |
|---|
| Clustering algorithms require a large number of distance computations between patterns and cluster centers, so their complexity is dominated by the number of patterns. At the same time, business and scientific databases are growing explosively, storing huge volumes of data. One of the main challenges for today's knowledge discovery systems is their ability to scale up to very large data sets. In this paper, we present a clustering methodology for scaling up any clustering algorithm. It is an iterative process based on partitioning a sample of the data into subsets. We also present extensive empirical tests demonstrating that the proposed methodology reduces the time complexity and, at the same time, may maintain the accuracy that would be achieved by a single clustering algorithm supplied with all the data. |
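The abstract describes partitioning a sample of the data into subsets so that clustering runs on small pieces rather than the full set. The sketch below illustrates that general partition-and-combine idea, not the authors' exact algorithm: it runs plain k-means on each subset, then clusters the pooled subset centers to obtain the final centers. All function names and parameters (`kmeans`, `partitioned_kmeans`, `n_subsets`) are illustrative assumptions.

```python
import random


def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2-D points; returns the k cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        buckets = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            buckets[i].append(p)
        # Move each center to the mean of its bucket (keep it if empty).
        centers = [(sum(x for x, _ in b) / len(b),
                    sum(y for _, y in b) / len(b)) if b else centers[j]
                   for j, b in enumerate(buckets)]
    return centers


def partitioned_kmeans(points, k, n_subsets=4, seed=0):
    """Cluster each subset independently, then cluster the pooled centers.

    Each subset is small, so each k-means call is cheap; the final
    k-means runs on only n_subsets * k points.
    """
    rng = random.Random(seed)
    shuffled = points[:]
    rng.shuffle(shuffled)
    subsets = [shuffled[i::n_subsets] for i in range(n_subsets)]
    pooled = [c for s in subsets for c in kmeans(s, k, seed=seed)]
    return kmeans(pooled, k, seed=seed)
```

Because each subset's k-means touches only a fraction of the patterns, the dominant pattern-to-center distance cost drops accordingly, which is the scaling effect the abstract claims.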
| Year | DOI | Venue |
|---|---|---|
| 2002 | 10.1016/S0167-8655(02)00031-4 | Pattern Recognition Letters |
| Keywords | Field | DocType |
|---|---|---|
| clustering process, clustering algorithm, data mining, parallel processing, single clustering algorithm, distributed computation, clustering methodology, time complexity, large data set, proposed methodology, meta-learning, clustering, knowledge discovery, distributed computing | Data mining, Canopy clustering algorithm, Fuzzy clustering, CURE data clustering algorithm, Data stream clustering, Correlation clustering, Computer science, Determining the number of clusters in a data set, Artificial intelligence, Constrained clustering, Cluster analysis, Machine learning | Journal |
| Volume | Issue | ISSN |
|---|---|---|
| 23 | 8 | 0167-8655 |
| Citations | PageRank | References |
|---|---|---|
| 16 | 1.02 | 16 |
Authors (2)

| Name | Order | Citations | PageRank |
|---|---|---|---|
| B. Boutsinas | 1 | 82 | 5.59 |
| T. Gnardellis | 2 | 16 | 1.02 |