Title
Near-Duplicate Image Retrieval Based on Contextual Descriptor
Abstract
The state of the art in near-duplicate image retrieval is largely based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual-word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words to immediately discard mismatches and reduce the number of candidate images. The new descriptor encodes the dominant-orientation and spatial-position relationships between referential visual words and their context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.
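To make the idea concrete, the sketch below illustrates one way such a contextual descriptor could work; it is not the paper's exact formulation. The feature layout (x, y, theta, word_id), the k-nearest-neighbor context, the bin counts, the helper names contextual_descriptor and is_consistent_match, and the histogram-intersection threshold are all assumptions introduced for this example.

```python
# Hedged sketch: encode the orientation and spatial-position relationships
# between a referential keypoint and its neighboring keypoints, then keep a
# visual-word match only if the two contexts agree. Not the authors' method.
import numpy as np

def contextual_descriptor(features, ref_idx, k=8, n_bins=4):
    """Build an (n_bins x n_bins) histogram over the k nearest neighbors of
    the referential keypoint: one axis bins the dominant-orientation
    difference, the other bins the neighbor's direction relative to the
    referential keypoint. features is an (N, 4) array of (x, y, theta, id)."""
    ref = features[ref_idx]
    others = np.delete(features, ref_idx, axis=0)
    dist = np.linalg.norm(others[:, :2] - ref[:2], axis=1)
    neighbors = others[np.argsort(dist)[:k]]

    hist = np.zeros((n_bins, n_bins))
    for x, y, theta, _ in neighbors:
        # Orientation difference, normalized to [0, 2*pi).
        d_theta = (theta - ref[2]) % (2 * np.pi)
        # Direction of the neighbor's position, measured relative to the
        # referential keypoint's dominant orientation (rotation-invariant).
        d_pos = (np.arctan2(y - ref[1], x - ref[0]) - ref[2]) % (2 * np.pi)
        i = int(d_theta / (2 * np.pi) * n_bins) % n_bins
        j = int(d_pos / (2 * np.pi) * n_bins) % n_bins
        hist[i, j] += 1
    return hist / max(len(neighbors), 1)

def is_consistent_match(desc_a, desc_b, threshold=0.5):
    """Keep a tentative visual-word match only if the contextual similarity
    (histogram intersection) of the two descriptors is high enough."""
    return np.minimum(desc_a, desc_b).sum() >= threshold
```

In a retrieval pipeline, such a check would run on each tentative visual-word match, discarding inconsistent pairs before candidate images are ranked, which is the filtering effect the abstract describes.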
Year: 2015
DOI: 10.1109/LSP.2014.2377795
Venue: IEEE Signal Process. Lett.
Keywords: spatial position, contextual similarity, quantization errors, local features, image matching, near-duplicate image retrieval, discriminative power, visual databases, visual word, contextual descriptor, bag-of-visual-words model, image retrieval, dominant orientation, spatial constraint, copydays dataset, visual words matching, referential visual, indexing, visualization, feature extraction, image resolution
Field: Pattern recognition, Computer science, Visualization, Search engine indexing, Image retrieval, Feature extraction, Artificial intelligence, Quantization (signal processing), Contextual image classification, Discriminative model, Visual Word
DocType: Journal
Volume: 22
Issue: 9
ISSN: 1070-9908
Citations: 7
PageRank: 0.55
References: 9
Authors: 3

Name            Order  Citations  PageRank
Jin-liang Yao   1      12         3.72
Bing Yang       2      44         8.37
Qiuming Zhu     3      7          0.89