Title
Kernel map compression for speeding the execution of kernel-based methods.
Abstract
The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss.
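The abstract describes the technique only at a high level; the sketch below illustrates the two-step idea in Python under stated assumptions: kernel PCA stands in for the kernel-based learner, and the compact replacement is a generalized RBF network built from k-means centers with a ridge-regression readout. All names and settings here (KernelPCA, KMeans, Ridge, the data sizes, the bandwidth heuristic) are illustrative assumptions, not the paper's actual algorithm or code.

```python
# Minimal sketch of kernel map compression, assuming scikit-learn-style tools.
# Step 1: learn with a kernel method; Step 2: approximate the empirical kernel
# map projections with a small generalized RBF network for fast execution.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))            # large training set (illustrative)

# Step 1: learning in kernel space. The cost of kpca.transform at execution
# time scales with the number of training samples (the empirical kernel map).
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=0.5).fit(X)

# Step 2: compact generalized RBF network: few centers plus a linear readout.
n_centers = 50
centers = KMeans(n_clusters=n_centers, n_init=5, random_state=0).fit(X).cluster_centers_
width = np.median(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1))

def rbf_features(X, centers, width):
    """Gaussian RBF activations of each sample with respect to each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

targets = kpca.transform(X)                        # projections to reproduce
readout = Ridge(alpha=1e-3).fit(rbf_features(X, centers, width), targets)

# Compressed execution path: 50 centers instead of 2000 training samples.
X_new = rng.normal(size=(10, 5))
approx = readout.predict(rbf_features(X_new, centers, width))
exact = kpca.transform(X_new)
print("max abs error:", np.abs(approx - exact).max())
```

The trade-off this sketch exposes is the one the abstract points to: fewer centers mean cheaper execution but a coarser approximation of the kernel projections, i.e., compression with graceful performance loss.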
Year
2011
DOI
10.1109/TNN.2011.2127485
Venue
IEEE Transactions on Neural Networks
Keywords
kernel representation,computationally efficient execution procedure,two-step procedure,empirical kernel,kernel map compression,statistical learning theory,mercer kernel method,computational complexity,kernel space,kernel-based methods,strong learning capability,kernel principal component analysis,artificial intelligence,artificial neural network,optimization,artificial neural networks,kernel methods,support vector machines,data compression,algorithms,learning artificial intelligence,support vector machine,radial basis functions,machine learning,clustering algorithms,principal component analysis,radial basis function,computer simulation,kernel method,kernel
DocType
Journal
Volume
22
Issue
6
ISSN
1941-0093
Citations
2
PageRank
0.38
References
20
Authors
2
Name               Order  Citations  PageRank
Omar Arif          1      22         5.87
Patricio A Vela    2      46         2.91