Title
Reweighting Augmented Samples by Minimizing the Maximal Expected Loss
Abstract
Data augmentation is an effective technique for improving the generalization of deep neural networks. However, previous data augmentation methods usually treat the augmented samples equally, without considering their individual impacts on the model. To address this, we propose assigning different weights to the augmented samples generated from the same training example. We construct the maximal expected loss, which is the supremum over any reweighted loss on the augmented samples. Inspired by adversarial training, we minimize this maximal expected loss (MMEL) and obtain a simple and interpretable closed-form solution: more attention should be paid to augmented samples with large loss values (i.e., harder examples). Minimizing this maximal expected loss enables the model to perform well under any reweighting strategy. The proposed method can generally be applied on top of any data augmentation method. Experiments are conducted on both natural language understanding tasks with token-level data augmentation, and image classification tasks with commonly used image augmentation techniques like random crop and horizontal flip. Empirical results show that the proposed method improves the generalization performance of the model.
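The abstract's core idea, reweighting augmented copies of the same example so that higher-loss (harder) copies get more weight, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the softmax-over-losses weighting and the `temperature` parameter are assumptions standing in for the paper's exact closed-form solution.

```python
import numpy as np

def mmel_weights(losses, temperature=1.0):
    """Assumed weighting scheme: a softmax over per-augmentation losses,
    so harder (higher-loss) augmented samples receive larger weights."""
    z = np.asarray(losses, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

def reweighted_loss(losses, temperature=1.0):
    """Reweighted loss for one training example's augmented copies:
    sum_i w_i * loss_i, with weights increasing in the loss."""
    losses = np.asarray(losses, dtype=float)
    w = mmel_weights(losses, temperature)
    return float(np.dot(w, losses))

# Example: three augmented views of one image with different losses.
losses = [0.2, 1.5, 0.9]
w = mmel_weights(losses)
# The hardest augmentation (loss 1.5) receives the largest weight,
# so the reweighted loss is at least the uniform average of the losses.
```

Because the weights increase with the loss, this reweighted objective upper-bounds the plain average over augmented samples, which matches the "maximal expected loss" interpretation in the abstract.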
Year: 2021
Venue: ICLR
DocType: Conference
ISSN: published in ICLR2021
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name          Order  Citations  PageRank
Mingyang Yi   1      0          1.01
Lu Hou        2      62         6.80
Lifeng Shang  3      485        30.96
Xin Jiang     4      150        32.43
Qun Liu       5      2149       203.11
Zhi-Ming Ma   6      227        18.26