Abstract |
---|
Multiple Kernel Learning (MKL) aims to achieve better results than single kernel learning by combining a compact set of sub-kernels. However, MKL with the L1-norm easily discards sub-kernels carrying complementary information, while MKL with the Lp-norm (p ≥ 2) often yields a redundant solution. To address these problems, a Selective Multiple Kernel Learning (SMKL) method, inspired by Ensemble Learning (EL), is proposed. Compared with Lp-norm (p ≥ 2) MKL, SMKL obtains a sparse solution through a pre-selection procedure. Compared with L1-norm MKL, SMKL preserves the sub-kernels with complementary information by guaranteeing high discrimination and large diversity among the pre-selected sub-kernels. To quantify the discrimination and diversity of sub-kernels, a new kernel evaluation measure is designed. SMKL reduces the scale of the MKL optimization and the memory required to store the sub-kernels, which extends the scale of problems that MKL can solve. In particular, a fast SMKL variant using an L∞-norm constraint is presented, which requires no MKL optimization process at all; as a result, memory is hardly a limitation for MKL on large-scale problems. Experiments show that the method is effective for classification. |
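The pre-selection idea in the abstract can be sketched as a greedy procedure: score each candidate sub-kernel by its discrimination (here approximated with centered kernel-target alignment, a common proxy; the paper's actual evaluation measure is not reproduced), penalize redundancy with already-selected kernels to enforce diversity, and finally combine the selected sub-kernels with uniform weights in the spirit of the L∞-norm variant, so no MKL optimization is needed. All function names, the `diversity_penalty` parameter, and the alignment score are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kernel_alignment(K, y):
    # Kernel-target alignment: cosine similarity between the Gram matrix K
    # and the ideal label kernel y y^T. Used here as a stand-in for the
    # paper's "discrimination" measure (an assumption for illustration).
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

def select_kernels(kernels, y, n_select=2, diversity_penalty=0.5):
    # Greedy pre-selection: favor kernels aligned with the labels
    # (discrimination) while penalizing similarity to kernels already
    # chosen (diversity), so complementary sub-kernels survive.
    selected, candidates = [], list(range(len(kernels)))
    while candidates and len(selected) < n_select:
        def score(i):
            disc = kernel_alignment(kernels[i], y)
            if not selected:
                return disc
            redundancy = max(
                np.sum(kernels[i] * kernels[j])
                / (np.linalg.norm(kernels[i]) * np.linalg.norm(kernels[j]))
                for j in selected
            )
            return disc - diversity_penalty * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    # L-infinity-style combination: uniform weights over the selected
    # sub-kernels, so no further MKL weight optimization is required.
    K = sum(kernels[i] for i in selected) / len(selected)
    return selected, K

# Toy usage: three RBF sub-kernels at different bandwidths on 1-D data.
X = np.array([0.0, 0.1, 5.0, 5.1])
y = np.array([1.0, 1.0, -1.0, -1.0])
D = (X[:, None] - X[None, :]) ** 2
kernels = [np.exp(-g * D) for g in (0.1, 1.0, 10.0)]
selected, K = select_kernels(kernels, y, n_select=2)
```

The greedy trade-off between alignment and redundancy mirrors the abstract's requirement that pre-selected sub-kernels be both highly discriminative and mutually diverse; the uniform averaging at the end corresponds to the fast L∞-constrained variant that skips the MKL optimization entirely.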
Year | DOI | Venue |
---|---|---|
2013 | 10.1016/j.patcog.2013.04.003 | Pattern Recognition |
Keywords | Field | DocType
---|---|---|
Ensemble learning, Kernel evaluation, Multiple kernel learning, Selective multiple kernel learning, Fast selective multiple kernel learning | Kernel (linear algebra), Radial basis function kernel, Pattern recognition, Multiple kernel learning, Compact space, Tree kernel, Polynomial kernel, Artificial intelligence, Ensemble learning, Machine learning, Mathematics | Journal
Volume | Issue | ISSN
---|---|---|
46 | 11 | 0031-3203
Citations | PageRank | References
---|---|---|
9 | 0.48 | 13
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Tao Sun | 1 | 168 | 16.47 |
Licheng Jiao | 2 | 5698 | 475.84 |
Fang Liu | 3 | 1188 | 125.46 |
Shuang Wang | 4 | 316 | 39.83 |
Jie Feng | 5 | 247 | 20.11 |