Title
Error Correction in Learning using SVMs
Abstract
This paper is concerned with learning binary classifiers under adversarial label noise. We introduce the problem of error correction in learning, where the goal is to recover the original clean data from a label-manipulated version of it, given (i) no constraints on the adversary other than an upper bound on the number of errors, and (ii) some regularity properties for the original data. We present a simple and practical error-correction algorithm called SubSVMs that learns individual SVMs on several small (log-size), class-balanced, random subsets of the data and then reclassifies the training points using a majority vote. Our analysis reveals the need for the two main ingredients of SubSVMs, namely class-balanced sampling and subsampled bagging. Experimental results on synthetic as well as benchmark UCI data demonstrate the effectiveness of our approach. In addition to noise tolerance, log-size subsampled bagging also yields significant run-time benefits over standard SVMs.
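For illustration, the following is a minimal sketch of the subsampled-bagging scheme the abstract describes: train SVMs on many small, class-balanced random subsets of the training data and relabel every training point by majority vote. It is written against scikit-learn's SVC; the function name subsvms_relabel, the parameters n_subsets and subset_size, and the choice of a linear kernel are illustrative assumptions, not the authors' reference implementation.

# Hypothetical sketch of the SubSVMs idea: SVMs on log-size, class-balanced
# random subsets, followed by a majority-vote reclassification of the training set.
import numpy as np
from sklearn.svm import SVC

def subsvms_relabel(X, y, n_subsets=101, subset_size=None, rng=None):
    """Return majority-vote labels for the (possibly noisy) training set (X, y)."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(rng)
    n = len(y)
    if subset_size is None:
        subset_size = max(2, int(np.ceil(np.log2(n))))   # log-size subsets
    classes = np.unique(y)
    per_class = max(1, subset_size // len(classes))      # class-balanced sampling
    votes = np.zeros((n, len(classes)), dtype=int)
    for _ in range(n_subsets):
        # Draw an equal number of points from each class for this subset.
        idx = np.concatenate([
            rng.choice(np.flatnonzero(y == c), size=per_class, replace=True)
            for c in classes
        ])
        clf = SVC(kernel="linear").fit(X[idx], y[idx])
        pred = clf.predict(X)                            # reclassify all training points
        for j, c in enumerate(classes):
            votes[pred == c, j] += 1
    return classes[np.argmax(votes, axis=1)]             # majority vote per point

Under these assumptions, the returned labels can be used in place of the (possibly corrupted) training labels, or compared against them to flag suspected errors.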
Year
2013
Venue
CoRR
Field
Pattern recognition, Computer science, Support vector machine, Error detection and correction, Artificial intelligence, Sampling (statistics), Majority rule, Machine learning, Binary number
DocType
Journal
Volume
abs/1301.2012
Citations
2
PageRank
0.38
References
0
Authors
3
Name | Order | Citations | PageRank
Srivatsan Laxman | 1 | 421 | 21.65
Sushil Mittal | 2 | 89 | 5.45
Ramarathnam Venkatesan | 3 | 1326 | 111.13