Abstract |
---|
Unlike two-class (multi-class) support vector machines, a one-class support vector machine is trained on massive target data and only few outliers. The data-selection strategies designed for two-class (multi-class) support vector machines are therefore not suitable for the one-class support vector machine. In this paper, the relative density degree is introduced to select useful training data for the one-class support vector machine: data that would become support vectors after training and that lie near the boundary of the data distribution. Since the relative density degree of data near the boundary of the training set is smaller than that of data in its interior, the boundary data can be preserved and the rest discarded according to their relative density degree. Experimental results show that when only about 20 % of the training set is preserved, performance does not decrease and surpasses that of a previous related method, while the model is simpler and training is faster. |
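The selection scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the k-nearest-neighbor estimate of density is an assumption, since the paper's exact definition of the relative density degree is not reproduced in this record. Points whose neighbors are far away (sparse regions near the boundary) get a low density score and are kept; dense interior points are discarded.

```python
import numpy as np

def relative_density(X, k=5):
    # Assumed density estimate: inverse of the mean distance to the
    # k nearest neighbors, normalized so the densest point scores 1.0.
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)           # exclude each point from its own neighbors
    knn = np.sort(dist, axis=1)[:, :k]       # distances to the k nearest neighbors
    density = 1.0 / (knn.mean(axis=1) + 1e-12)
    return density / density.max()

def select_boundary_subset(X, keep_ratio=0.2, k=5):
    # Keep the keep_ratio fraction of points with the LOWEST relative
    # density degree, i.e. those lying nearest the boundary of the
    # data distribution; the dense interior points are discarded.
    d = relative_density(X, k=k)
    n_keep = max(1, int(round(keep_ratio * len(X))))
    idx = np.argsort(d)[:n_keep]
    return X[idx], idx

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # synthetic Gaussian target data
subset, idx = select_boundary_subset(X, keep_ratio=0.2)
```

The reduced `subset` would then be passed to a standard one-class SVM trainer in place of the full training set, which is what shrinks the model and speeds up training.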
Year | DOI | Venue |
---|---|---
2016 | 10.1007/s00500-015-1757-7 | Soft Computing - A Fusion of Foundations, Methodologies and Applications |
Keywords | Field | DocType
---|---|---
Relative density degree, Training set selection, One-class SVM, One-class classification | Training set, Structured support vector machine, One-class classification, Pattern recognition, Computer science, Support vector machine, Relative density, Outlier, Boundary detection, Artificial intelligence, Relevance vector machine, Machine learning | Journal
Volume | Issue | ISSN
---|---|---
20 | 11 | 1432-7643
Citations | PageRank | References
---|---|---
3 | 0.37 | 35
Authors |
---|
6 |