Title
Rejecting mismatches of visual words by contextual descriptors
Abstract
The Bag-of-Visual-Words model has become popular in image retrieval and computer vision. However, when the local features of Interest Points (IPs) are quantized into visual words, the discriminative power of the local features is reduced. To address this issue, we propose a novel contextual descriptor for local features that improves their discriminative power. The proposed contextual descriptor encodes the dominant orientation and the directional relationships between a reference interest point (IP) and the IPs in its context, and is represented as a compact Boolean array. Experimental results show that the proposed contextual descriptors are more robust and compact than existing contextual descriptors and improve the matching accuracy of visual words, making the Bag-of-Visual-Words model more suitable for image retrieval and computer vision tasks.
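The encoding described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes one plausible design in which the directions from a reference IP to its context IPs, measured relative to the reference IP's dominant orientation, are quantized into bins of a Boolean array, and two matched visual words are kept only if their arrays overlap sufficiently. The function names, bin count, and overlap threshold are illustrative assumptions.

```python
import math

def contextual_descriptor(ref, context, n_bins=8):
    """Build a Boolean array marking which direction bins (measured
    relative to the reference IP's dominant orientation) contain
    context IPs.  ref = (x, y, dominant_orientation_radians);
    context = [(x, y), ...].  Illustrative sketch, not the paper's code."""
    bits = [False] * n_bins
    x0, y0, theta = ref
    for (x, y) in context:
        # Direction to the context IP, made rotation-invariant by
        # subtracting the reference IP's dominant orientation.
        angle = math.atan2(y - y0, x - x0) - theta
        b = int(((angle % (2 * math.pi)) / (2 * math.pi)) * n_bins) % n_bins
        bits[b] = True
    return bits

def accept_match(d1, d2, min_overlap=0.5):
    """Keep a visual-word match only if the two Boolean descriptors
    agree on enough occupied bins (a Jaccard-style overlap test)."""
    both = sum(a and b for a, b in zip(d1, d2))
    either = sum(a or b for a, b in zip(d1, d2))
    return either == 0 or both / either >= min_overlap
```

Because the descriptor is a short Boolean array, comparing two descriptors reduces to cheap bitwise operations, which is consistent with the abstract's claim of compactness.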
Year: 2014
DOI: 10.1109/ICARCV.2014.7064539
Venue: ICARCV
Keywords: mismatch rejection, reference interest point, image matching, discriminative power improvement, boolean algebra, semi-local spatial similarity, compact boolean array, bag-of-visual-words, visual word, image retrieval, contextual descriptors, computer vision, matching accuracy improvement, local feature, contextual descriptor, visualization, image resolution, robustness, feature extraction
Field: Bag-of-words model in computer vision, Pattern recognition, Computer science, Visualization, Image retrieval, Feature extraction, Robustness (computer science), Artificial intelligence, Contextual image classification, Discriminative model, Visual Word
DocType: Conference
ISSN: 2474-2953
Citations: 1
PageRank: 0.39
References: 11
Authors: 3
Author details (Name, Order, Citations/PageRank):
Jin-liang Yao, 1, 123.72
Bing Yang, 2, 448.37
Qiuming Zhu, 3, 10.39