Title
Evaluation Examples Are Not Equally Informative: How Should That Change NLP Leaderboards?
Abstract
Leaderboards are widely used in NLP and push the field forward. While leaderboards are a straightforward ranking of NLP models, this simplicity can mask nuances in evaluation items (examples) and subjects (NLP models). Rather than replace leaderboards, we advocate a re-imagining so that they better highlight if and where progress is made. Building on educational testing, we create a Bayesian leaderboard model where latent subject skill and latent item difficulty predict correct responses. Using this model, we analyze the ranking reliability of leaderboards. Afterwards, we show the model can guide what to annotate, identify annotation errors, detect overfitting, and identify informative examples. We conclude with recommendations for future benchmark tasks.
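The abstract's central object is a leaderboard model in which a latent skill for each subject (an NLP model) and a latent difficulty for each item (an evaluation example) together predict whether a response is correct. Below is a minimal, hypothetical sketch of that idea as a Rasch-style (1PL) item response model fit by MAP gradient ascent on simulated data; the function names, hyperparameters, and toy response matrix are illustrative assumptions, not the paper's implementation.

import numpy as np

# Rasch-style (1PL) item response model:
#   p(subject j answers item i correctly) = sigmoid(skill_j - difficulty_i)
# Fit by MAP gradient ascent with weak Gaussian priors on both latent vectors.
# All names, hyperparameters, and the toy data below are illustrative assumptions.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_leaderboard_irt(responses, n_steps=1000, lr=0.5, reg=0.01):
    """Estimate latent subject skill and item difficulty.

    responses: (n_subjects, n_items) array with 1 = correct, 0 = incorrect.
    """
    n_subjects, n_items = responses.shape
    skill = np.zeros(n_subjects)
    difficulty = np.zeros(n_items)
    for _ in range(n_steps):
        probs = sigmoid(skill[:, None] - difficulty[None, :])
        err = responses - probs  # per-cell gradient of the Bernoulli log-likelihood
        # Mean-normalized likelihood gradient plus a ridge term from the Gaussian prior.
        skill += lr * (err.mean(axis=1) - reg * skill)
        difficulty += lr * (-err.mean(axis=0) - reg * difficulty)
    return skill, difficulty

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_skill = rng.normal(size=6)         # six hypothetical leaderboard submissions
    true_difficulty = rng.normal(size=200)  # two hundred hypothetical evaluation items
    p_correct = sigmoid(true_skill[:, None] - true_difficulty[None, :])
    responses = (rng.random(p_correct.shape) < p_correct).astype(float)

    skill, difficulty = fit_leaderboard_irt(responses)
    print("ranking by estimated skill:", np.argsort(-skill))
    print("estimated hardest item index:", int(np.argmax(difficulty)))

Ranking submissions by estimated skill rather than raw accuracy is what lets such a model weight informative items more heavily and flag suspect ones; the paper's actual formulation and inference are richer than this sketch.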
Year
2021
DOI
10.18653/v1/2021.acl-long.346
Venue
59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1
DocType
Conference
Volume
2021.acl-long
Citations
0
PageRank
0.34
References
0
Authors
6
Name                        Order   Citations   PageRank
Pedro Rodriguez             1       4           2.41
Joe Barrow                  2       0           0.34
Alexander Miserlis Hoyle    3       0           0.34
John Lalor                  4       15          4.63
Robin Jia                   5       227         12.53
Jordan L. Boyd-Graber       6       66          8.40