Title
A General Framework For Adversarial Label Learning
Abstract
We consider the task of training classifiers without fully labeled data. We propose a weakly supervised method, adversarial label learning, that trains classifiers to perform well even when the provided labels are noisy and possibly correlated. Our framework allows users to provide multiple weak labelings of the data together with constraints on those labels. The model then learns classifier parameters by solving a zero-sum game for binary problems and a non-zero-sum game for multi-class problems. The game is played between an adversary that chooses labels for the data and a model that minimizes the error it incurs against those adversarial labels; the weak supervision constrains which labels the adversary can choose. The method thereby minimizes an upper bound on the classifier's error rate using projected primal-dual subgradient descent, and minimizing this bound protects against bias and dependencies in the weak supervision. We first evaluate our framework on binary classification tasks and then extend the algorithm to multi-class datasets. Our experiments show that the method can train classifiers without ground-truth labels and outperforms other approaches to weakly supervised learning.
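The binary game described in the abstract can be sketched as follows: an adversary picks soft labels that maximize the model's expected error, subject to upper bounds on each weak labeler's error against those same labels, while the model minimizes its error; both are updated by projected primal-dual subgradient steps. This is a minimal illustrative sketch on synthetic data, not the paper's implementation; the data, the error bounds `b`, the learning rate, and the logistic model are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two Gaussian blobs; true labels are hidden from the learner.
n, d = 200, 2
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
true_y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Two noisy weak labelers (soft votes in [0, 1]) and assumed error bounds b.
weak = np.stack([np.clip(true_y + rng.normal(0, 0.4, n), 0, 1) for _ in range(2)])
b = np.array([0.3, 0.3])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Expected error of predictions p against soft labels y:
#   err(p, y) = mean(p * (1 - y) + (1 - p) * y)
w = np.zeros(d)        # model parameters (logistic model), minimizing player
y = np.full(n, 0.5)    # adversarial soft labels, maximizing player
gam = np.zeros(2)      # Lagrange multipliers for the weak-labeler constraints
lr = 0.1

for _ in range(3000):
    p = sigmoid(X @ w)
    # Model: gradient descent on err(p, y) with respect to w.
    g_p = (1 - 2 * y) / n                # d err / d p, elementwise
    w -= lr * (X.T @ (g_p * p * (1 - p)))
    # Adversary: projected gradient ascent on the Lagrangian
    #   err(p, y) - gam . (err(weak_j, y) - b_j), clipped to [0, 1].
    g_y = (1 - 2 * p) / n - gam @ ((1 - 2 * weak) / n)
    y = np.clip(y + lr * g_y, 0, 1)
    # Multipliers: ascend on constraint violation, projected to gam >= 0.
    viol = np.mean(weak * (1 - y) + (1 - weak) * y, axis=1) - b
    gam = np.maximum(gam + lr * viol, 0)

acc = np.mean((sigmoid(X @ w) > 0.5) == true_y)
```

The constraints keep the adversarial labels consistent with the weak supervision, so as long as the true labels are feasible under the bounds `b`, the model's error against the adversarial labels upper-bounds its true error.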
Year
2021
DOI
v22/20-537.html
Venue
JOURNAL OF MACHINE LEARNING RESEARCH
Keywords
Weak Supervision, Adversarial Learning, Unsupervised Learning, Constraint Learning, Lagrangian Optimization
DocType
Journal
Volume
22
Issue
1
ISSN
1532-4435
Citations
0
PageRank
0.34
References
0
Authors
2
Name | Order | Citations | PageRank
Chidubem Arachie | 1 | 0 | 0.68
Bert Huang | 2 | 563 | 39.09