Title
Understanding Deep Learning Performance through an Examination of Test Set Difficulty: A Psychometric Case Study
Abstract
Interpreting the performance of deep learning models beyond test set accuracy is challenging. Characteristics of individual data points are often not considered during evaluation, and each data point is treated equally. We examine the impact of a test set question's difficulty to determine whether there is a relationship between difficulty and model performance. We model difficulty using well-studied psychometric methods on human response patterns. Experiments on Natural Language Inference (NLI) and Sentiment Analysis (SA) show that the likelihood of answering a question correctly is impacted by the question's difficulty. As DNNs are trained with more data, easy examples are learned more quickly than hard examples.
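A standard choice of such psychometric methods is Item Response Theory (IRT), which estimates a latent difficulty parameter for each question from patterns of human responses. The sketch below is illustrative only, not the authors' implementation: it assumes a three-parameter logistic (3PL) IRT model, uses simulated response data, and fits only the difficulty parameter while holding discrimination and guessing fixed (real IRT fitting estimates all parameters jointly, e.g. via EM or MCMC).

import numpy as np
from scipy.optimize import minimize_scalar

# 3PL IRT: probability that a subject with ability theta answers an item
# correctly, given discrimination a, difficulty b, and guessing parameter c.
def p_correct(theta, a, b, c):
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Simulate human response patterns for one item (illustrative data only).
rng = np.random.default_rng(0)
true_a, true_b, true_c = 1.5, 0.8, 0.2    # hypothetical item parameters
thetas = rng.normal(0.0, 1.0, size=1000)  # subject abilities ~ N(0, 1)
responses = rng.random(1000) < p_correct(thetas, true_a, true_b, true_c)

# Estimate difficulty b by maximum likelihood, treating abilities as known
# (a simplification for this sketch).
def neg_log_lik(b):
    p = p_correct(thetas, true_a, b, true_c)
    return -np.sum(responses * np.log(p) + (~responses) * np.log(1.0 - p))

result = minimize_scalar(neg_log_lik, bounds=(-4.0, 4.0), method="bounded")
print(f"estimated difficulty b = {result.x:.2f} (true value {true_b})")

A higher estimated b means fewer humans answer the item correctly at a given ability level, which is the notion of difficulty the abstract relates to model performance.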
Year
2018
DOI
10.18653/v1/d18-1500
Venue
EMNLP
Field
Computer science, Natural language processing, Artificial intelligence, Deep learning, Machine learning, Test set
DocType
Conference
Volume
2018
Citations
0
PageRank
0.34
References
0
Authors
4
Name                  | Order | Citations | PageRank
John Lalor            | 1     | 15        | 4.63
Hao Wu                | 2     | 923       | 8.83
Tsendsuren Munkhdalai | 3     | 169       | 13.49
Hong Yu               | 4     | 1982      | 179.13