Title: Evaluation Method, Dataset Size or Dataset Content: How to Evaluate Algorithms for Image Matching?
Abstract
Most vision papers must include some evaluation work to demonstrate that the proposed algorithm improves on existing ones. Generally, these evaluation results are presented in tabular or graphical form. Neither is ideal, because there is no indication of whether any performance differences are statistically significant. Moreover, the size and nature of the dataset used for evaluation will obviously have a bearing on the results, yet neither of these factors is usually discussed. This paper evaluates the effectiveness of commonly used performance-characterization metrics for image feature detection and description in matching problems, and explores the use of statistical tests such as McNemar’s test and ANOVA as better alternatives.
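Since the abstract's central argument is that paired success/failure comparisons are better served by McNemar's test than by raw score tables, the sketch below illustrates how such a comparison might look. It is a minimal example, not code from the paper: the per-image outcome arrays, the function name mcnemar_z, and the 1.96 significance threshold are illustrative assumptions.

```python
# Minimal sketch: comparing two image-matching algorithms on the same
# test images with McNemar's test. All data below is hypothetical.
import numpy as np
from scipy.stats import norm

def mcnemar_z(success_a, success_b):
    """Continuity-corrected McNemar Z statistic for paired Boolean outcomes.

    success_a, success_b: Boolean sequences, one entry per test image,
    True where the corresponding algorithm matched that image correctly.
    """
    a = np.asarray(success_a, dtype=bool)
    b = np.asarray(success_b, dtype=bool)
    n_ab = np.sum(a & ~b)   # A succeeded where B failed
    n_ba = np.sum(~a & b)   # B succeeded where A failed
    if n_ab + n_ba == 0:
        return 0.0          # the algorithms never disagree
    return (abs(n_ab - n_ba) - 1) / np.sqrt(n_ab + n_ba)

# Hypothetical per-image results for two algorithms on 10 images.
alg_a = [True, True, True, False, True, True, True, True, False, True]
alg_b = [True, False, False, False, True, False, True, True, False, False]

z = mcnemar_z(alg_a, alg_b)
p = 2 * norm.sf(z)          # two-sided p-value, normal approximation
print(f"Z = {z:.3f}, p = {p:.3f}")  # Z >= 1.96 would mean p < 0.05
```

The test uses only the images on which the two algorithms disagree, which is what makes it appropriate for paired comparisons on a shared dataset; on the toy data above, Z is 1.5 (p ≈ 0.13), so the apparent difference would not be judged significant at the 5% level.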
Year: 2016
DOI: https://doi.org/10.1007/s10851-015-0626-4
Venue: Journal of Mathematical Imaging and Vision
Keywords: Performance characterization, Feature matching, Homography
Field: Data mining, Computer vision, McNemar's test, Feature detection, Image matching, Computer science, Algorithm, Feature matching, Homography, Artificial intelligence, Statistical hypothesis testing
DocType: Journal
Volume: 55
Issue: 3
ISSN: 0924-9907
Citations: 1
PageRank: 0.43
References: 22
Authors (3)
Name | Order | Citations | PageRank
Nadia Kanwal | 1 | 59 | 7.00
Erkan Bostanci | 2 | 65 | 9.18
Adrian F. Clark | 3 | 221 | 72.99