Abstract |
---|
Traditional multiple kernel learning (MKL) is usually based on implicit kernel mapping and adopts a combination of kernels rather than a single kernel. MKL has been demonstrated to have a significant advantage over single-kernel learning. Although MKL assigns different weights to different kernels, the weights do not change over the input space. This setting may not suit data with underlying local distributions. To address this problem, Gönen and Alpaydın (2008) introduced a localized gating model into the traditional MKL framework so as to assign a kernel different weights in different regions of the input space. In this paper, we likewise integrate the localized gating model into our previous work MultiK-MHKS, an effective multiple empirical kernel learning method, obtaining multiple localized empirical kernel learning (MLEKL). Our contribution is to establish the first localized formulation in the empirical kernel learning framework. Experimental results on benchmark data sets validate the effectiveness of the proposed MLEKL. © 2012 Springer-Verlag London Limited. |
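The abstract's core idea, following Gönen and Alpaydın's localized MKL, is to replace fixed kernel weights with data-dependent gating values η_m(x), so the combined kernel becomes K(x_i, x_j) = Σ_m η_m(x_i) K_m(x_i, x_j) η_m(x_j). The sketch below is not the paper's implementation; it is a minimal illustration assuming a softmax gating model over linear score functions and two hypothetical base kernels (RBF and linear).

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Pairwise squared Euclidean distances, then the RBF map.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear_kernel(X, Y):
    return X @ Y.T

def gating(X, V):
    # Softmax gating: eta_m(x) = exp(v_m . x) / sum_l exp(v_l . x).
    # V is a hypothetical (M, d) matrix of gating parameters, one row per kernel.
    scores = X @ V.T                                  # (n, M)
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

def localized_combined_kernel(X, V, base_kernels):
    # K[i, j] = sum_m eta_m(x_i) * K_m(i, j) * eta_m(x_j)
    eta = gating(X, V)                                # (n, M)
    K = np.zeros((len(X), len(X)))
    for m, k_m in enumerate(base_kernels):
        K += np.outer(eta[:, m], eta[:, m]) * k_m(X, X)
    return K

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))                           # toy data
V = rng.normal(size=(2, 2))                           # one gating vector per kernel
kernels = [lambda A, B: rbf_kernel(A, B, 0.5), linear_kernel]
K = localized_combined_kernel(X, V, kernels)
print(K.shape)  # (5, 5)
```

Because each term is a positive semidefinite base kernel Schur-multiplied by a rank-one outer product of nonnegative gate values, the combined matrix remains symmetric positive semidefinite and is usable in any kernel machine.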
Year | DOI | Venue |
---|---|---|
2013 | 10.1007/s00521-012-1161-5 | Neural Computing and Applications |
Keywords | Field | DocType |
Empirical kernel mapping, Gating model, Implicit kernel mapping, Local information, Multiple kernel learning, Pattern classification | Radial basis function kernel, Tree kernel, Kernel principal component analysis, Polynomial kernel, Artificial intelligence, String kernel, Mathematical optimization, Pattern recognition, Kernel embedding of distributions, Multiple kernel learning, Variable kernel density estimation, Mathematics, Machine learning | Journal
Volume | Issue | ISSN |
23 | 7-8 | 1433-3058 |
Citations | PageRank | References |
3 | 0.37 | 23 |
Authors |
---|
4 |