Title: Why Does a Hilbertian Metric Work Efficiently in Online Learning With Kernels?

Abstract: The autocorrelation matrix of the kernelized input vector is well approximated by the squared Gram matrix (scaled down by the dictionary size). This holds true under the condition that the input covariance matrix in the feature space is approximated by its sample estimate based on the dictionary elements, leading to a couple of fundamental insights into online learning with kernels. First, the eig...
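The abstract's central claim can be illustrated with a minimal numerical sketch (not the paper's code; a Gaussian kernel and a random dictionary are assumed here). With kernelized input vector k(x) = [κ(x, x_j)]_{j=1..r} over a dictionary {x_j}, estimating the autocorrelation matrix R = E[k(x)k(x)ᵀ] with the dictionary elements themselves as samples yields exactly the squared Gram matrix scaled down by the dictionary size r:

```python
import numpy as np

rng = np.random.default_rng(0)
r, d = 20, 3                       # dictionary size, input dimension (illustrative values)
X = rng.standard_normal((r, d))    # hypothetical dictionary elements x_1, ..., x_r

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of a and the rows of b."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

K = gaussian_kernel(X, X)          # r x r Gram matrix of the dictionary

# Sample estimate of R = E[k(x) k(x)^T] using the dictionary elements as inputs:
#   R_hat = (1/r) * sum_i k(x_i) k(x_i)^T = (1/r) K K   (K is symmetric)
kx = gaussian_kernel(X, X)         # row i is the kernelized input k(x_i)
R_hat = kx.T @ kx / r

print(np.allclose(R_hat, K @ K / r))  # True: R_hat equals the scaled squared Gram matrix
```

For general inputs x the relation is only approximate; it becomes tight precisely when the dictionary-based sample estimate of the feature-space covariance is accurate, which is the condition stated in the abstract.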
Year: 2016
DOI: 10.1109/LSP.2016.2598615
Venue: IEEE Signal Processing Letters
Keywords: Dictionaries, Kernel, Covariance matrices, Measurement, Correlation, Estimation, Eigenvalues and eigenfunctions
Field: Kernel (linear algebra), Mathematical optimization, Pattern recognition, Radial basis function kernel, Kernel embedding of distributions, Autocorrelation matrix, Polynomial kernel, Artificial intelligence, Hyperplane, Covariance matrix, Mathematics, Eigenvalues and eigenvectors
DocType: Journal
Volume: 23
Issue: 10
ISSN: 1070-9908
Citations: 2
PageRank: 0.36
References: 19
Authors: 2

Name                  Order  Citations  PageRank
Masahiro Yukawa       1      272        30.44
Klaus-Robert Müller   2      12756      1615.17