| Abstract |
| --- |
| The k-nearest-neighbor (kNN) algorithm is a simple but effective classification method that predicts the class label of a query sample from the information contained in its neighborhood. Previous variants of kNN usually consider the k nearest neighbors individually, using only their quantity or distance information; however, the quantity and the isolated distances may be insufficient for an effective classification decision. This paper investigates the kNN method from the perspective of local distribution, based on which we propose an improved implementation of kNN. The proposed method assigns the query sample to the class with the maximum posterior probability, estimated from the local distribution via the Bayesian rule. Experiments conducted on 15 benchmark datasets demonstrate the excellent performance and robustness of the proposed method compared to other state-of-the-art classifiers. |
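The abstract's classification rule (assign the query to the class maximizing a posterior estimated from the neighborhood's local distribution) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact estimator: the Gaussian-kernel likelihood, the neighborhood-frequency prior, and the `bandwidth` parameter are all assumptions made for the example.

```python
import numpy as np
from collections import Counter

def local_distribution_knn(X_train, y_train, x_query, k=10, bandwidth=1.0):
    """Classify x_query by the maximum posterior estimated from the
    local distribution of each class within the k-neighborhood.

    Hypothetical sketch: the kernel density likelihood and the
    within-neighborhood class-frequency prior are illustrative choices,
    not necessarily those used in the paper.
    """
    # Distances from the query to every training sample.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k nearest neighbors.
    nn_idx = np.argsort(dists)[:k]
    nn_dists, nn_labels = dists[nn_idx], y_train[nn_idx]

    posteriors = {}
    for c, n_c in Counter(nn_labels).items():
        # Prior: class frequency inside the neighborhood.
        prior = n_c / k
        # Likelihood: Gaussian-kernel estimate over that class's local distances.
        class_d = nn_dists[nn_labels == c]
        likelihood = np.mean(np.exp(-(class_d ** 2) / (2 * bandwidth ** 2)))
        # Bayes rule up to a normalization constant shared by all classes.
        posteriors[c] = prior * likelihood
    return max(posteriors, key=posteriors.get)
```

Unlike plain majority voting, a mixed neighborhood is resolved by how tightly each class's neighbors cluster around the query, which is the kind of local-distribution information the paper argues plain quantity or isolated distances miss.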
| Year | DOI | Venue |
| --- | --- | --- |
| 2015 | 10.1007/978-3-319-18038-0_19 | ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PART I |

| Keywords | Field | DocType |
| --- | --- | --- |
| Classification, Nearest neighbors, Local distribution, Posterior probability | Nearest neighbour algorithm, k-nearest neighbors algorithm, Data mining, Pattern recognition, Computer science, Posterior probability, Robustness (computer science), Artificial intelligence, Machine learning, Bayesian probability | Conference |

| Volume | ISSN | Citations |
| --- | --- | --- |
| 9077 | 0302-9743 | 2 |

| PageRank | References | Authors |
| --- | --- | --- |
| 0.38 | 8 | 5 |
| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Chengsheng Mao | 1 | 12 | 3.32 |
| Bin Hu | 2 | 778 | 107.21 |
| Philip Moore | 3 | 41 | 3.99 |
| Yun Su | 4 | 21 | 2.47 |
| Manman Wang | 5 | 14 | 2.73 |