Abstract |
---|
While people compare images using semantic concepts, computers compare them using low-level visual features that may have little to do with those semantics. To narrow the gap between the high-level semantics of visual objects and the low-level features extracted from them, this paper develops a framework for learning similarity (LS) with neural networks for semantic image classification, in which an LS-based k-nearest-neighbor (k-NNL) classifier assigns a label to an unknown image by majority vote among its k most similar features. Experimental results on an image database show that the k-NNL classifier outperforms both the Euclidean-distance-based k-NN (k-NNE) classifier and back-propagation network classifiers (BPNC). |

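The k-NN majority-vote step described in the abstract can be sketched as follows. The paper's learned similarity network is not specified in this record, so the `similarity` argument below is a placeholder: a k-NNE-style negative Euclidean distance stands in for the learned measure, and all function and variable names are illustrative, not from the paper.

```python
from collections import Counter
import numpy as np

def knn_classify(query, features, labels, k, similarity):
    """Label a query by majority vote over its k most similar training features."""
    sims = np.array([similarity(query, f) for f in features])
    top_k = np.argsort(sims)[::-1][:k]  # indices of the k most similar features
    votes = Counter(labels[i] for i in top_k)
    return votes.most_common(1)[0][0]

# Stand-in similarity (negative Euclidean distance, as in k-NNE); the paper
# instead trains a neural network to produce the similarity scores.
def euclidean_similarity(a, b):
    return -np.linalg.norm(np.asarray(a) - np.asarray(b))

features = [[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]]
labels = ["indoor", "indoor", "outdoor", "outdoor"]
print(knn_classify([0.05, 0.0], features, labels, k=3,
                   similarity=euclidean_similarity))  # -> indoor
```

Swapping `euclidean_similarity` for a trained network's output is the only change needed to turn this k-NNE sketch into the k-NNL scheme the abstract describes.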
Year | DOI | Venue |
---|---|---|
2005 | 10.1016/j.neucom.2004.10.114 | Neurocomputing |
Keywords | Field | DocType
---|---|---|
Neural networks, Learning similarity, Image classification, k-NN rule, Robustness | Visual Objects, Pattern recognition, Computer science, Euclidean distance, Robustness (computer science), Artificial intelligence, Image database, Contextual image classification, Artificial neural network, Classifier (linguistics), Semantics, Machine learning | Journal

Volume | ISSN | Citations
---|---|---|
67 | 0925-2312 | 6

PageRank | References | Authors
---|---|---|
0.48 | 4 | 4

Name | Order | Citations | PageRank |
---|---|---|---|
Dianhui Wang | 1 | 1547 | 93.41 |
Joon Shik Lim | 2 | 51 | 6.39 |
Myung-Mook Han | 3 | 13 | 4.64 |
Byungwook Lee | 4 | 100 | 15.07 |