Title
Benchmarking result diversification in social image retrieval
Abstract
This article addresses retrieval result diversification in the context of social image retrieval and discusses the results achieved during the MediaEval 2013 benchmarking campaign. The 38 submitted runs and their results are described and analyzed. A comparison of expert versus crowdsourcing annotations shows that crowdsourcing results differ slightly and exhibit higher inter-observer disagreement, but are comparable at a lower cost. Multimodal approaches achieve the best results in terms of cluster recall. Manual approaches can lead to high precision but often lower diversity. Based on this detailed analysis of the results, we provide insights for future work on this topic.
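The abstract compares systems by cluster recall, the diversification metric used at MediaEval: for a given query, the fraction of ground-truth visual clusters that have at least one representative among the top-N retrieved images. A minimal sketch of that idea, assuming a simple photo-to-cluster ground-truth mapping (the function name and toy data are illustrative, not taken from the paper):

```python
def cluster_recall_at_n(ranked_ids, cluster_of, n):
    """CR@N: fraction of all ground-truth clusters that have at least
    one representative among the top-n ranked results."""
    all_clusters = set(cluster_of.values())
    # Clusters covered by the top-n results (unjudged ids are skipped).
    covered = {cluster_of[i] for i in ranked_ids[:n] if i in cluster_of}
    return len(covered) / len(all_clusters)

# Toy example: four judged photos grouped into three clusters.
clusters = {"p1": 0, "p2": 0, "p3": 1, "p4": 2}
ranking = ["p1", "p2", "p3", "p4"]
print(cluster_recall_at_n(ranking, clusters, 3))  # 2 of 3 clusters appear in the top 3
```

A high CR@N therefore rewards rankings whose top results span many distinct clusters, which is why diversification-oriented (and in this benchmark, multimodal) runs score well on it even when plain precision is unchanged.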
Year
2014
DOI
10.1109/ICIP.2014.7025621
Venue
IEEE International Conference on Image Processing (ICIP)
Keywords
image retrieval, social networking (online), MediaEval 2013 benchmarking, crowdsourcing annotations, expert annotations, retrieval result diversification, social image retrieval, crowdsourcing, image content description, re-ranking, result diversification, social photo retrieval
Field
Information retrieval, Crowdsourcing, Computer science, Diversification (marketing strategy), Observer (quantum physics), Recall, Social image, Benchmarking
DocType
Conference
ISSN
1522-4880
Citations
1
PageRank
0.35
References
15
Authors
5
Name              Order  Citations  PageRank
Bogdan Ionescu    1      458        56.67
Adrian Popescu    2      92         7.19
Henning Müller    3      2538       218.89
María Menéndez    4      21         1.19
Anca-Livia Radu   5      45         3.79