Title
Capture Human Disagreement Distributions by Calibrated Networks for Natural Language Inference
Abstract
Natural Language Inference (NLI) datasets contain examples whose labels are highly ambiguous due to the subjectivity of the task. Several recent efforts acknowledge and embrace this ambiguity and explore how to capture the distribution of human disagreement. In contrast to directly learning from gold ambiguity labels, which relies on special resources, we argue that a model naturally captures the human ambiguity distribution as long as it is calibrated, i.e., its predictive probabilities reflect the true likelihood of correctness. Our experiments show that when the model is well calibrated, whether by label smoothing or temperature scaling, it obtains performance competitive with prior work, both on divergence scores between the predictive probabilities and the true human opinion distribution, and on accuracy. This reveals that the overhead of collecting gold ambiguity labels can be cut by broadly solving how to calibrate the NLI network.
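The abstract names temperature scaling as one way to calibrate the network. As a minimal sketch (not the paper's implementation), temperature scaling divides the validation logits by a single scalar T chosen to minimize negative log-likelihood, then reuses that T at test time; the toy data and grid search below are illustrative assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nll(probs, labels):
    # mean negative log-likelihood of the true labels
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 46)):
    # grid-search the single scalar T that minimizes validation NLL
    return min(grid, key=lambda t: nll(softmax(logits / t), labels))

# toy 3-class validation set with over-confident logits (illustrative only)
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, 200)
logits = rng.normal(size=(200, 3)) + 4.0 * np.eye(3)[labels]
T = fit_temperature(logits, labels)
calibrated = softmax(logits / T)  # softened distribution over labels
```

Because the grid contains T = 1 (no scaling), the fitted temperature can only match or improve the validation NLL; the calibrated distribution is what would be compared against the human opinion distribution.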
Year
2022
DOI
10.18653/v1/2022.findings-acl.120
Venue
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022)
DocType
Conference
Volume
Findings of the Association for Computational Linguistics: ACL 2022
Citations
0
PageRank
0.34
References
0
Authors
8
Name          Order  Citations  PageRank
Yuxia Wang    1      0          0.68
Minghan Wang  2      0          2.03
Yimeng Chen   3      0          4.06
Shimin Tao    4      0          4.73
Jiaxin Guo    5      0          1.69
Chang Su      6      0          3.38
Min Zhang     7      1849       157.00
Hao Yang      8      0          7.44