Title
End-to-End Self-Debiasing Framework for Robust NLU Training
Abstract
Existing Natural Language Understanding (NLU) models have been shown to incorporate dataset biases, leading to strong performance on in-distribution (ID) test sets but poor performance on out-of-distribution (OOD) ones. We introduce a simple yet effective debiasing framework whereby the shallow representations of the main model are used to derive a bias model, and both models are trained simultaneously. We demonstrate on three well-studied NLU tasks that, despite its simplicity, our method leads to competitive OOD results. It significantly outperforms other debiasing approaches on two tasks, while still delivering high in-distribution performance.
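The abstract describes the method only at a high level, so below is a minimal, hypothetical PyTorch sketch of the core idea: a bias classifier reads a shallow (early-layer) representation of the same encoder whose final representation feeds the main classifier, and both are trained jointly. The toy transformer encoder, the shallow_layer choice, and the product-of-experts loss combination are illustrative assumptions, not the paper's exact recipe.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDebiasingClassifier(nn.Module):
    # Toy encoder: the bias head reads an early layer's representation,
    # the main head reads the final layer's representation.
    def __init__(self, vocab_size=1000, d_model=128, n_layers=6,
                 shallow_layer=2, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers))
        self.shallow_layer = shallow_layer
        self.bias_head = nn.Linear(d_model, n_classes)  # shallow features
        self.main_head = nn.Linear(d_model, n_classes)  # final features

    def forward(self, token_ids):
        h = self.embed(token_ids)
        shallow = None
        for i, layer in enumerate(self.layers, start=1):
            h = layer(h)
            if i == self.shallow_layer:
                shallow = h  # keep early-layer states for the bias head
        # Mean-pool token states into one vector per example.
        bias_logits = self.bias_head(shallow.mean(dim=1))
        main_logits = self.main_head(h.mean(dim=1))
        return main_logits, bias_logits

def self_debiasing_loss(main_logits, bias_logits, labels):
    # The bias head is trained as a plain classifier on shallow features.
    # The main head is trained through a product-of-experts combination
    # (summed log-probabilities), so it receives little gradient on examples
    # the bias head already solves from surface cues. Detaching the bias
    # logits is one design choice among several; PoE itself is a common
    # debiasing combiner assumed here, not confirmed by the abstract.
    bias_loss = F.cross_entropy(bias_logits, labels)
    combined = (F.log_softmax(main_logits, dim=-1)
                + F.log_softmax(bias_logits.detach(), dim=-1))
    main_loss = F.cross_entropy(combined, labels)
    return main_loss + bias_loss

# Usage on random toy data:
model = SelfDebiasingClassifier()
tokens = torch.randint(0, 1000, (8, 16))  # batch of 8 sequences, length 16
labels = torch.randint(0, 3, (8,))
loss = self_debiasing_loss(*model(tokens), labels)
loss.backward()

Because both heads share one encoder and one backward pass, the setup is end-to-end: no separately pre-trained bias model is needed, which matches the simultaneous-training claim in the abstract.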
Year: 2021
DOI: 10.18653/v1/2021.findings-acl.168
Venue: ACL/IJCNLP
DocType: Conference
Volume: 2021.findings-acl
ISSN:
Conference: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, August 2021, pages 1923--1929
Citations: 0
PageRank: 0.34
References: 0
Authors: 4

Name                    Order  Citations  PageRank
Abbas Ghaddar             1        0        0.34
Philippe Langlais         2        0        0.68
Mehdi Rezagholizadeh      3        3        8.82
Ahmad Azad Ab Rashid      4        3        5.03