Abstract
---
We present two online gradient learning algorithms for designing condensed k-nearest neighbor (k-NN) classifiers. The goal of these learning procedures is to minimize a measure of performance closely related to the expected misclassification rate of the k-NN classifier. One possible implementation of the algorithms is given. Convergence properties are analyzed and connections with other works are established. We compare these learning procedures with Kohonen's LVQ algorithms [7] and with k-NN classification on the handwritten NIST databases [5]. Experimental results demonstrate the potential of the proposed learning algorithms.
Year | DOI | Venue |
---|---|---|
1999 | 10.1007/BFb0098212 | IWANN (1) |
Keywords | Field | DocType
---|---|---
k-nearest neighbor classifiers, on-line gradient learning algorithms, k nearest neighbor | Instance-based learning, Active learning (machine learning), Computer science, Empirical risk minimization, Artificial intelligence, Classifier (linguistics), Learning classifier system, k-nearest neighbors algorithm, Stability (learning theory), Pattern recognition, Learning vector quantization, Algorithm, Machine learning | Conference

Volume | ISSN | ISBN
---|---|---
1606 | 0302-9743 | 3-540-66069-0

Citations | PageRank | References
---|---|---
0 | 0.34 | 3
Authors
---
2

Name | Order | Citations | PageRank
---|---|---|---
S. Bermejo | 1 | 87 | 12.49
Joan Cabestany | 2 | 1276 | 143.82