Title
Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax
Abstract
Data augmentation in Natural Language Processing (NLP) often yields examples that are less human-interpretable. Recently, leveraging kNN search so that augmented examples are retrieved from large repositories of unlabelled sentences has been a step toward interpretable augmentation. Inspired by this paradigm, we introduce MiniMax-kNN, a sample-efficient data augmentation strategy. We exploit a semi-supervised approach based on knowledge distillation to train a model on augmented data. In contrast to existing kNN augmentation techniques that blindly incorporate all retrieved samples, our method dynamically selects the subset of augmented samples that maximizes the KL-divergence of the training loss. This maximization step extracts the most useful samples, ensuring the augmented data covers the regions of the input space with the highest loss. These maximum-loss regions are then shrunk in our minimization step, which trains on the selected samples. We evaluated our technique on several text classification tasks and demonstrated that MiniMax-kNN consistently outperforms strong baselines. Our results show that MiniMax-kNN requires fewer augmented examples and less computation to outperform state-of-the-art kNN-based augmentation techniques.
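To make the maximize-then-minimize loop described above concrete, here is a minimal, self-contained PyTorch sketch of one such step. Everything in it is an illustrative assumption rather than the authors' implementation: the toy linear teacher and student classifiers, the retrieve_knn stand-in (which just perturbs the input instead of querying a repository of unlabelled sentences), and the constants K and TOP_M.

"""Minimal sketch of one MiniMax-kNN step, as described in the abstract.
All models, data, and helper names here are illustrative assumptions,
not the authors' code."""
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, CLASSES, K, TOP_M = 16, 4, 8, 2   # feature size, labels, kNN pool, kept samples

teacher = nn.Linear(DIM, CLASSES)       # stands in for a fine-tuned teacher
student = nn.Linear(DIM, CLASSES)       # model trained on the augmented data
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def retrieve_knn(x):
    """Stand-in for kNN retrieval from an unlabelled corpus: returns K
    'neighbours' per input (here, noisy copies, purely for demonstration)."""
    return x.unsqueeze(1) + 0.1 * torch.randn(x.size(0), K, DIM)

def kl_to_teacher(logits_t, logits_s):
    """Per-sample KL(teacher || student): the quantity ranked in the
    maximization step and minimized in the distillation step."""
    return F.kl_div(F.log_softmax(logits_s, -1),
                    F.softmax(logits_t, -1), reduction="none").sum(-1)

def minimax_knn_step(x):
    neighbours = retrieve_knn(x)                       # (B, K, DIM)
    flat = neighbours.reshape(-1, DIM)
    # Maximization: score every neighbour, keep the TOP_M per input whose
    # teacher-student divergence (i.e. training loss) is largest.
    with torch.no_grad():
        scores = kl_to_teacher(teacher(flat), student(flat)).view(-1, K)
    idx = scores.topk(TOP_M, dim=1).indices            # (B, TOP_M)
    picked = torch.gather(
        neighbours, 1, idx.unsqueeze(-1).expand(-1, -1, DIM)
    ).reshape(-1, DIM)
    # Minimization: distil the teacher into the student on those
    # maximum-loss samples, shrinking the high-loss regions.
    optimizer.zero_grad()
    loss = kl_to_teacher(teacher(picked).detach(), student(picked)).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

print(minimax_knn_step(torch.randn(5, DIM)))  # one maximize-then-minimize step

In the paper's actual setting, retrieve_knn would presumably be replaced by nearest-neighbour search over sentence representations of a large unlabelled corpus, and the teacher and student would be fine-tuned text classifiers.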
Year
2021
Venue
ACL/IJCNLP
DocType
Conference
Volume
2021.findings-acl
Citations
0
PageRank
0.34
References
0
Authors
4
Name                   Order   Citations   PageRank
Ehsan Kamalloo         1       0           0.34
Mehdi Rezagholizadeh   2       3           8.82
Peyman Passban         3       6           5.14
Ali Ghodsi             4       3306        156.01