Title |
---|
Capture Human Disagreement Distributions by Calibrated Networks for Natural Language Inference |
Abstract |
---|
Natural Language Inference (NLI) datasets contain examples with highly ambiguous labels owing to the subjectivity of the task. Several recent efforts have acknowledged and embraced the existence of this ambiguity and explored how to capture the distribution of human disagreement. In contrast to directly learning from gold ambiguity labels, which relies on specialized resources, we argue that a model naturally captures the human ambiguity distribution as long as it is calibrated, i.e. its predictive probability reflects the true correctness likelihood. Our experiments show that when the model is well-calibrated, whether by label smoothing or temperature scaling, it obtains performance competitive with prior work, both on divergence scores between the predictive probability and the true human opinion distribution and on accuracy. This reveals that the overhead of collecting gold ambiguity labels can be cut by more broadly solving how to calibrate the NLI network. |
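The calibration methods the abstract names (label smoothing, temperature scaling) are standard techniques rather than specific to this paper. As a minimal sketch of temperature scaling — with all names, data, and the grid-search fitting procedure being illustrative assumptions, not taken from the paper — one can fit a single temperature T on validation logits by minimizing negative log-likelihood, then divide logits by T before the softmax:

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nll(logits, labels):
    # Average negative log-likelihood of the gold labels.
    probs = softmax(logits)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    # Temperature scaling: pick T > 0 minimizing validation NLL of logits / T.
    return min(grid, key=lambda t: nll(val_logits / t, val_labels))

# Toy 3-class setup (e.g. NLI: entailment / neutral / contradiction).
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)
# Sharp logits aligned with the labels -> an overconfident model.
logits = rng.normal(size=(200, 3)) + 4.0 * np.eye(3)[labels]
# Corrupt 30% of labels to simulate ambiguous gold annotations.
noise = rng.integers(0, 3, size=200)
labels = np.where(rng.random(200) < 0.3, noise, labels)

T = fit_temperature(logits, labels)
calibrated = softmax(logits / T)  # softened, better-calibrated probabilities
```

Because the toy model is overconfident relative to the noisy labels, the fitted T comes out above 1, flattening the predictive distribution — the mechanism by which a calibrated network can mirror human disagreement without gold ambiguity labels.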
Year | DOI | Venue |
---|---|---|
2022 | 10.18653/v1/2022.findings-acl.120 | Findings of the Association for Computational Linguistics (ACL 2022) |

DocType | Volume | Citations |
---|---|---|
Conference | Findings of the Association for Computational Linguistics: ACL 2022 | 0 |

PageRank | References | Authors |
---|---|---|
0.34 | 0 | 8 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yuxia Wang | 1 | 0 | 0.68 |
Minghan Wang | 2 | 0 | 2.03 |
Yimeng Chen | 3 | 0 | 4.06 |
Shimin Tao | 4 | 0 | 4.73 |
Jiaxin Guo | 5 | 0 | 1.69 |
Chang Su | 6 | 0 | 3.38 |
Min Zhang | 7 | 1849 | 157.00 |
Hao Yang | 8 | 0 | 7.44 |