Title
SMO-Style Algorithms for Learning Using Privileged Information
Abstract
Recently, Vapnik et al. [11, 12, 13] introduced a new learning model called Learning Using Privileged Information (LUPI). In this model, along with the standard training data, the teacher supplies the student with additional (privileged) information. In the optimistic case, the LUPI model can improve the bound on the probability of test error from O(1/√n) to O(1/n), where n is the number of training examples. Since the semi-supervised learning model with n labeled and N unlabeled examples can only achieve the bound O(1/√(n+N)) in the optimistic case, the LUPI model can significantly outperform it. To implement the LUPI model, Vapnik et al. [11, 12, 13] suggested an SVM-type algorithm called SVM+, which, however, requires solving a more difficult optimization problem than the one traditionally solved for SVM. In this paper we develop two new algorithms for solving the optimization problem of SVM+. Our algorithms have a structure similar to that of the empirically successful SMO algorithm for solving SVM. Our experiments show that, in terms of the generalization error/running time tradeoff, one of our algorithms is superior to the widely used interior point optimizer.
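For context, the SVM+ problem that these algorithms target is usually written in the LUPI literature roughly as follows (the notation below is ours and may differ in detail from the paper's): the privileged vectors x_i^* parametrize a correcting function that plays the role of the slack variables of the standard SVM,

\min_{w,\,b,\,w^*,\,b^*} \ \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\|w^*\|^2 + C\sum_{i=1}^{n}\bigl[\langle w^*, z_i^*\rangle + b^*\bigr]
\text{s.t. } \ y_i\bigl(\langle w, z_i\rangle + b\bigr) \ge 1 - \bigl(\langle w^*, z_i^*\rangle + b^*\bigr), \qquad \langle w^*, z_i^*\rangle + b^* \ge 0, \qquad i = 1,\dots,n,

where z_i and z_i^* are the images of x_i and the privileged x_i^* in the decision and correcting feature spaces, and gamma, C > 0 are regularization constants.

To illustrate the kind of decomposition scheme the paper builds on, the sketch below implements the simplified SMO procedure for the ordinary soft-margin SVM dual (not the SVM+ variant developed in the paper): two multipliers are updated analytically at each step while all others are held fixed. Function and variable names are ours, and the second index is chosen by the naive random heuristic rather than Platt's second-choice heuristic.

import numpy as np

def smo_svm(X, y, C=1.0, tol=1e-3, max_passes=10):
    """Simplified SMO for the standard soft-margin SVM dual (linear kernel).
    Labels y must be in {-1, +1}. Returns dual variables alpha and bias b."""
    n = X.shape[0]
    K = X @ X.T                                  # Gram matrix of the linear kernel
    alpha, b, passes = np.zeros(n), 0.0, 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]            # prediction error on example i
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = np.random.choice([k for k in range(n) if k != i])   # naive second choice
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:                             # box constraints for the pair
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2.0 * K[i, j] - K[i, i] - K[j, j]      # curvature along the chosen pair
                if eta >= 0:
                    continue
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])   # keep sum(alpha*y) fixed
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] - y[j] * (alpha[j] - aj_old) * K[j, j]
                b = b1 if 0 < alpha[i] < C else (b2 if 0 < alpha[j] < C else (b1 + b2) / 2.0)
                changed += 1
        passes = passes + 1 if changed == 0 else 0           # stop after a full pass with no updates
    return alpha, b

# Toy usage on two Gaussian clusters (illustrative only).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2.0, 1.0, (20, 2)), rng.normal(2.0, 1.0, (20, 2))])
    y = np.hstack([-np.ones(20), np.ones(20)])
    alpha, b = smo_svm(X, y, C=1.0)
    w = (alpha * y) @ X                          # recover primal weights for the linear kernel
    print("training accuracy:", np.mean(np.sign(X @ w + b) == y))

An SMO-style method for SVM+ must handle a different constraint structure, since the SVM+ dual has two groups of multipliers tied together by its equality constraints, which is one reason the SVM+ problem is harder to decompose than the standard SVM dual.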
Year
2010
Venue
DMIN
Field
Training set, Computer science, Support vector machine, Algorithm, Generalization error, Sequential minimal optimization, Interior point method, Optimization problem
DocType
Conference
Citations
25
PageRank
1.43
References
7
Authors
4
Name, Order, Citations, PageRank
Dmitry Pechyony, 1, 162, 11.09
Rauf Izmailov, 2, 200, 36.96
Akshay Vashist, 3, 176, 12.64
Vladimir Vapnik, 4, 160753, 397.91