Abstract |
---|
The excessive computational resources required by the Nearest Neighbor rule are a major concern for many specialists and practitioners in the Pattern Recognition community. Many proposals for decreasing this computational burden, through reduction of the training sample size, have been published. This paper introduces an algorithm to reduce the training sample size while preserving the original decision boundaries as much as possible. Consequently, the algorithm tends to obtain classification accuracy close to that of the whole training sample. Several experimental results demonstrate the effectiveness of this method when compared to other reduction algorithms based on similar ideas. |
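The abstract's notion of reducing the training sample while keeping a subset that is "consistent" (every original point is still classified correctly by 1-NN over the subset) can be illustrated with a minimal sketch in the spirit of Hart's classic condensing procedure, one of the reduction algorithms of the kind this paper compares against. This is an illustrative sketch, not the paper's algorithm; the function name `condense` and all details are assumptions.

```python
import math
import random

def condense(X, y, seed=0):
    """Hart-style condensation sketch (illustrative, not this paper's
    method): greedily grow a subset S of the training data until every
    training point is classified correctly by its nearest neighbor in S
    (a 'consistent subset'), roughly preserving the decision boundary."""
    order = list(range(len(X)))
    random.Random(seed).shuffle(order)      # scan order affects the result
    keep = [order[0]]                       # seed S with one prototype
    changed = True
    while changed:                          # pass until no more additions
        changed = False
        for i in order:
            # nearest prototype currently in S
            j = min(keep, key=lambda k: math.dist(X[k], X[i]))
            if y[j] != y[i]:                # misclassified by S: keep point
                keep.append(i)
                changed = True
    return sorted(keep)
```

For two well-separated classes, most interior points are discarded and only enough prototypes to keep the subset consistent are retained, which is how such methods cut the Nearest Neighbor rule's storage and search cost.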
Year | DOI | Venue |
---|---|---|
2005 | 10.1142/S0218001405004332 | International Journal of Pattern Recognition and Artificial Intelligence
Keywords | Field | DocType
---|---|---|
Nearest Neighbor rule, size reduction, classification accuracy, consistent subset, decision boundaries | Data mining, Best bin first, Computer science, Artificial intelligence, Nearest-neighbor chain algorithm, Large margin nearest neighbor, Decision boundary, Nearest neighbor search, k-nearest neighbors algorithm, Pattern recognition, Sample size determination, Machine learning | Journal
Volume | Issue | ISSN
---|---|---|
19 | 6 | 0218-0014
Citations | PageRank | References
---|---|---|
32 | 0.94 | 22
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
R Barandela | 1 | 558 | 23.46 |
Francesc J. Ferri | 2 | 356 | 38.92 |
José Salvador Sánchez | 3 | 184 | 15.36 |