Abstract |
---|
Minimal complexity machines (MCMs) are a class of hyperplane classifiers that minimize a tight bound on the Vapnik-Chervonenkis dimension. MCMs can be used both in the input space and, via the kernel trick, in a higher-dimensional feature space. MCMs tend to produce much sparser solutions than support vector machines, often using three to ten times fewer support vectors. However, large datasets pose significant challenges for storing and operating on the kernel matrix. In this paper, we present a stochastic subgradient descent solver for large-scale machine learning with the MCM. The proposed approach uses an explicit feature map-based approximation of the kernel to improve the scalability of the algorithm. |
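The combination the abstract describes — replacing the kernel matrix with an explicit feature map and then training by stochastic subgradient descent — can be sketched as follows. This is a hypothetical illustration only, not the paper's MCM objective or solver: it uses random Fourier features (one common explicit feature map for the RBF kernel) and a plain Pegasos-style subgradient step on the hinge loss; all function names, the `gamma` and `lam` parameters, and the toy data are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, n_features=100, gamma=0.1):
    """Explicit feature map: inner products of the mapped vectors
    approximate the RBF kernel exp(-gamma * ||x - y||^2), so no
    kernel matrix ever needs to be stored."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def sgd_hinge(Z, y, lam=1e-3, epochs=20):
    """Stochastic subgradient descent on the L2-regularized hinge
    loss, one randomly chosen sample per step."""
    n, p = Z.shape
    w = np.zeros(p)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # standard decaying step size
            margin = y[i] * (Z[i] @ w)
            # Subgradient: regularizer term always, loss term only
            # when the margin constraint is violated.
            grad = lam * w - (y[i] * Z[i] if margin < 1.0 else 0.0)
            w -= eta * grad
    return w

# Toy usage: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
Z = rff_map(X)
w = sgd_hinge(Z, y)
acc = np.mean(np.sign(Z @ w) == y)
```

Because the feature map is explicit, both memory and per-step cost scale with the number of random features rather than with the square of the dataset size, which is the scalability argument the abstract makes.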
Year | DOI | Venue |
---|---|---|
2017 | 10.1109/TSMC.2017.2694321 | IEEE Trans. Systems, Man, and Cybernetics: Systems |
Keywords | Field | DocType
---|---|---
Kernel, Support vector machines, Complexity theory, Training, Sparse matrices, Optimization, Sun | Graph kernel, Feature vector, Mathematical optimization, Radial basis function kernel, Computer science, Kernel embedding of distributions, Support vector machine, Polynomial kernel, Artificial intelligence, Kernel method, String kernel, Machine learning | Journal
Volume | Issue | ISSN
---|---|---
47 | 10 | 2168-2216
Citations | PageRank | References
---|---|---
3 | 0.39 | 40
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Mayank Sharma | 1 | 168 | 22.18 |
Jayadeva | 2 | 67 | 10.50
Sumit Soman | 3 | 20 | 7.53
Himanshu Pant | 4 | 7 | 2.45 |