Title
MaxUp: Lightweight Adversarial Training with Data Augmentation Improves Neural Network Training
Abstract
We propose MaxUp, a simple and effective technique for improving the generalization performance of machine learning models, especially deep neural networks. The idea is to generate a set of augmented data with random perturbations or transforms, and minimize the maximum, or worst-case, loss over the augmented data. By doing so, we implicitly introduce a smoothness or robustness regularization against the random perturbations, and hence improve the generalization performance. For example, in the case of Gaussian perturbation, MaxUp is asymptotically equivalent to using the gradient norm of the loss as a penalty to encourage smoothness. We test MaxUp on a range of tasks, including image classification, 3D point cloud classification, and adversarial certification, on which MaxUp consistently outperforms the baseline methods, without introducing substantial computational overhead. In particular, we improve the top-1 accuracy on ImageNet classification from 85.5% (without extra data) to 85.8%.
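The core idea in the abstract — draw several randomly perturbed copies of each input and back-propagate only through the worst-case one — can be sketched as follows. This is a minimal illustration with NumPy and a toy quadratic loss, not the authors' implementation; the function name `maxup_loss` and all parameters (`m` augmented copies, Gaussian scale `sigma`) are illustrative assumptions.

```python
import numpy as np

def maxup_loss(loss_fn, x, y, m=4, sigma=0.1, rng=None):
    """MaxUp sketch: evaluate the loss on m Gaussian-perturbed copies of x
    and return the maximum (worst-case) value. In training, the gradient
    would be taken through this maximum only."""
    rng = rng or np.random.default_rng(0)  # fixed seed for reproducibility
    losses = [loss_fn(x + rng.normal(0.0, sigma, size=x.shape), y)
              for _ in range(m)]
    return max(losses)

# Toy loss: squared distance between the input's mean and the target y.
loss = lambda x, y: float((x.mean() - y) ** 2)
x = np.zeros(8)
worst = maxup_loss(loss, x, y=0.0, m=8, sigma=0.5)
```

Minimizing `worst` instead of the clean loss penalizes inputs whose loss is sensitive to small perturbations, which is the smoothness regularization effect the abstract describes.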
Year
2021
DOI
10.1109/CVPR46437.2021.00250
Venue
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)
DocType
Conference
ISSN
1063-6919
Citations
0
PageRank
0.34
References
4
Authors
4
Name           Order  Citations  PageRank
ChengYue Gong  1      2          2.74
Tongzheng Ren  2      3          5.46
Mao Ye         3      0          1.69
Liu, Qiang     4      472        48.61