Abstract |
---|
The World Wide Web contains billions of images, and duplicates of these images can frequently be found across many websites. These duplicates may be exact copies or may differ slightly in their visual content. In this paper we provide a comparative study of how well content-based duplicate image detection methods are able to detect the duplicates of a query image. We conduct a survey to better understand the ways in which such images on the internet differ from each other, and use these observations to form a realistic and challenging duplicate image detection scenario. The methods we evaluate in our study are representative techniques from the research literature. In our evaluation, we examine the performance of each method in relation to its descriptor size, description time and matching time, to assess its feasibility of application to large image collections (> 1 million images). |
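The paper itself is not reproduced in this record, but as a rough illustration of the kind of method it evaluates — a content-based detector with a compact descriptor, cheap description time and cheap matching time — here is a minimal difference-hash (dHash) sketch. dHash is a standard technique used only as a stand-in example; it is not necessarily one of the methods the study compares, and all function names and parameters below are our own.

```python
# Illustrative sketch only: a minimal difference-hash (dHash) duplicate
# detector. It produces a 64-bit descriptor per image, so matching two
# images costs a single XOR plus a popcount -- the kind of descriptor
# size / matching time trade-off the study measures.

def dhash(pixels, hash_size=8):
    """Hash a grayscale image given as a 2D list of intensities (0-255).

    The image is shrunk by naive nearest-neighbour sampling to
    (hash_size + 1) columns x hash_size rows; each bit then records
    whether a pixel is brighter than its right neighbour, yielding a
    hash_size**2-bit descriptor (64 bits by default).
    """
    h, w = len(pixels), len(pixels[0])
    small = [
        [pixels[r * h // hash_size][c * w // (hash_size + 1)]
         for c in range(hash_size + 1)]
        for r in range(hash_size)
    ]
    bits = 0
    for row in small:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes (the matching cost)."""
    return bin(a ^ b).count("1")

# Tiny demo: a synthetic gradient image and a slightly brightened copy
# produce near-identical hashes, while matching stays a bit operation.
base = [[(x * 3 + y) % 256 for x in range(64)] for y in range(64)]
near_dup = [[min(255, v + 2) for v in row] for row in base]
print(hamming(dhash(base), dhash(near_dup)))  # small distance => duplicate
```

Because the descriptor is a single 64-bit integer, a collection of a million images needs only 8 MB of descriptor storage, which is why compact hashes of this kind are attractive at the scales (> 1 million images) the paper targets.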
Year | DOI | Venue |
---|---|---|
2013 | 10.1109/ICME.2013.6607451 | IEEE International Conference on Multimedia and Expo (ICME)
Keywords | Field | DocType
---|---|---|
Internet, image processing, image retrieval, Web search, Websites, World Wide Web, content-based duplicate image detection, large image collections, query image, image redundancy | Image map, Computer vision, Automatic image annotation, Information retrieval, Image detection, Computer science, Image retrieval, Image processing, Digital image, Artificial intelligence, Digital image processing, The Internet | Conference
ISSN | Citations | PageRank
---|---|---|
1945-7871 | 5 | 0.40
References | Authors
---|---|
8 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Bart Thomee | 1 | 773 | 39.96 |
Mark J. Huiskes | 2 | 922 | 34.00 |
Erwin M. Bakker | 3 | 378 | 41.20 |
Michael S. Lew | 4 | 2742 | 166.02 |