Title
Quantitative analysis of automatic performance evaluation systems based on the h -index
Abstract
Since its invention, the h-index has been the most frequently discussed bibliometric value and one of the most commonly used metrics to quantify a researcher’s scientific output. As it becomes increasingly popular to use the metric as an indication of the quality of a job applicant or an employee, it becomes ever more important to ensure its correctness. Many platforms offer the h-index of a scientist as a service, sometimes without the explicit knowledge of the respective person. In this article we show that looking up the h-index of a researcher on the five most commonly used platforms, namely AMiner, Google Scholar, ResearchGate, Scopus and Web of Science, yields values whose variance is in many cases as large as the average value. This is due to varying definitions of what constitutes a scientific article, the underlying data bases, and differing quality in solving the entity recognition problem. For our study, we crawled the h-indices of the world’s top researchers according to two different rankings, of all Nobel Prize laureates except those for Literature and Peace, and of the teaching staff of the computer science department of the TU Kaiserslautern, Germany, for whom we additionally computed the h-index manually. We thereby showed that the individual h-indices differ between the platforms to an alarming extent. We observed that researchers with an extraordinarily high h-index and researchers with an index appropriate to their career stage and scientific field are affected alike by these problems.
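The h-index itself is straightforward to compute from a list of per-paper citation counts; the platform discrepancies the abstract describes arise from *which* papers and citations enter that list, not from the formula. A minimal sketch (the function name and the example citation lists are illustrative, not taken from the paper):

```python
def h_index(citations):
    """Largest h such that at least h publications have >= h citations each."""
    # Sort citation counts in descending order, then find the largest rank i
    # (1-based) at which the i-th paper still has at least i citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Two hypothetical publication lists for the same researcher, as two
# platforms with different article coverage might report them:
print(h_index([25, 18, 12, 7, 6, 5, 2]))  # -> 5
print(h_index([25, 18, 12, 7]))           # -> 4
```

As the example shows, merely dropping three lightly cited papers from the underlying record already shifts the index, which is why differing definitions of a "scientific article" and imperfect author disambiguation produce the variance reported in the study.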
Year
2020
DOI
10.1007/s11192-020-03407-7
Venue
Scientometrics
Keywords
Bibliometrics, Big data, h-index
DocType
Journal
Volume
123
Issue
2
ISSN
0138-9130
Citations
0
PageRank
0.34
References
0
Authors
4
Name                  Order  Citations  PageRank
Marc P. Hauer         1      0          1.69
Xavier C. R. Hofmann  2      0          0.34
Tobias D. Krafft      3      4          1.39
Katharina Anna Zweig  4      81         16.32