Title: Extreme vector machine for fast training on large data
Abstract: Different types of loss functions are often adopted in SVMs or their variants to meet practical requirements, and scaling the corresponding SVMs to large datasets is becoming increasingly important in practice. In this paper, the extreme vector machine (EVM) is proposed to enable fast training of SVMs with several typical loss functions on large datasets. EVM begins with a fast approximation of the convex hull of the training data in the feature space, expressed by extreme vectors, and then carries out the corresponding SVM optimization over the extreme vector set. When the hinge loss is adopted, EVM coincides with the approximate extreme points support vector machine (AESVM) for classification. When the squared hinge loss, the least squares loss, or the Huber loss is adopted, EVM yields three further versions, namely L2-EVM, LS-EVM and Hub-EVM, respectively, for classification or regression. In contrast to the most closely related method, AESVM, EVM retains AESVM's theoretical advantages while being applicable to a wide variety of loss functions. Compared with the other state-of-the-art fast SVM training algorithms, CVM and FastKDE, EVM relaxes their restriction to least squares loss functions and experimentally exhibits superiority in training time, robustness, and number of support vectors.
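The two-stage pipeline the abstract describes — first select the extreme vectors that approximately span the convex hull of the training data, then solve the SVM only over that much smaller subset — can be sketched as follows. This is a toy illustration, not the paper's algorithm: the random-projection hull approximation and the Pegasos-style hinge-loss solver are stand-ins chosen for brevity, and EVM itself works in the (kernel-induced) feature space rather than the input space.

```python
import numpy as np

def extreme_vectors(X, n_directions=50, seed=0):
    """Approximate the extreme points (convex-hull vertices) of X by
    keeping, for each random direction, the point with the largest
    projection. A toy stand-in for EVM's extreme-vector selection."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_directions, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # argmax of <x, d> over the data picks a hull vertex per direction
    return np.unique(np.argmax(X @ dirs.T, axis=0))

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent on the hinge loss (one of the
    losses EVM supports); labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            w *= (1.0 - eta * lam)          # shrink from the L2 penalty
            if y[i] * (X[i] @ w + b) < 1:   # hinge-loss margin violation
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

# Two separable Gaussian blobs as toy training data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y = np.array([-1] * 500 + [1] * 500)

# Select extreme vectors per class, then train only on that subset.
keep = np.concatenate([extreme_vectors(X[y == -1]),
                       500 + extreme_vectors(X[y == 1])])
w, b = train_linear_svm(X[keep], y[keep])
acc = np.mean(np.sign(X @ w + b) == y)
print(len(keep), acc)
```

Because only hull points can become support vectors of the full problem, training on the extreme-vector subset (here at most 100 of the 1000 points) keeps accuracy on the whole set while shrinking the optimization, which is the source of EVM's speedup on large data.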
Year: 2020
DOI: 10.1007/s13042-019-00936-3
Venue: International Journal of Machine Learning and Cybernetics
Keywords: Support vector machine, Convex hull, Loss functions, Fast training
Field: Least squares, Extreme point, Feature vector, Hinge loss, Computer science, Support vector machine, Convex hull, Algorithm, Robustness (computer science), Huber loss
DocType: Journal
Volume: 11
Issue: 1
ISSN: 1868-808X
Citations: 0
PageRank: 0.34
References: 23
Authors: 3

Name           Order  Citations  PageRank
Xiaoqing Gu    1      44         9.30
Fu-lai Chung   2      244        34.50
Shitong Wang   3      1485       109.13