Title
Approximate kernel matrix computation on GPUs for large scale learning applications
Abstract
Kernel-based learning methods require quadratic space and time complexities to compute the kernel matrix. These complexities limit the applicability of kernel methods to large scale problems with millions of data points. In this paper, we introduce a novel representation of kernel matrices on Graphics Processing Units (GPUs). The representation exploits the sparsity of the kernel matrix to address the space complexity problem. It also respects the guidelines for memory access on GPUs, which are critical for good performance, to address the time complexity problem. Our representation utilizes the locality-preserving properties of space filling curves to obtain a band approximation of the kernel matrix. To validate the representation, we use Affinity Propagation (AP), an unsupervised clustering algorithm, as an example kernel method. Experimental results show a 40x speedup of AP using our representation without degradation in clustering performance.
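The following is a minimal CPU sketch (NumPy, not the authors' GPU implementation) of the idea summarized in the abstract: points are reordered along a space filling curve so that nearby points receive nearby indices, and only a fixed-width band of an RBF kernel matrix is stored. The choice of a 2-D Morton (Z-order) curve, the Gaussian kernel, and parameters such as half_band and gamma are illustrative assumptions.

import numpy as np

def morton_order(points, bits=10):
    """Order 2-D points along a Z-order (Morton) space filling curve."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # Quantize each coordinate to an integer grid of 2**bits cells.
    q = ((points - mins) / (maxs - mins + 1e-12) * (2 ** bits - 1)).astype(np.int64)

    def spread(x):
        # Insert a zero bit between consecutive bits of x (bit-interleaving helper).
        out = np.zeros_like(x)
        for b in range(bits):
            out |= ((x >> b) & 1) << (2 * b)
        return out

    codes = spread(q[:, 0]) | (spread(q[:, 1]) << 1)
    return np.argsort(codes)

def banded_rbf_kernel(points, gamma=1.0, half_band=64):
    """Store only a (2*half_band + 1)-wide band of the RBF kernel matrix."""
    order = morton_order(points)
    x = points[order]                       # points reordered along the curve
    n = len(x)
    band = np.zeros((n, 2 * half_band + 1))
    for i in range(n):
        lo, hi = max(0, i - half_band), min(n, i + half_band + 1)
        d2 = ((x[lo:hi] - x[i]) ** 2).sum(axis=1)
        band[i, lo - i + half_band: hi - i + half_band] = np.exp(-gamma * d2)
    return band, order

if __name__ == "__main__":
    pts = np.random.rand(5000, 2)
    band, order = banded_rbf_kernel(pts, gamma=10.0, half_band=32)
    print(band.shape)  # (5000, 65): O(n * band width) storage instead of O(n^2)

Because the space filling curve keeps spatially close points at close indices, most large kernel values fall inside the band, which is what makes the band approximation viable for methods such as Affinity Propagation.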
Year
2009
DOI
10.1145/1542275.1542355
Venue
I4CS
Keywords
gpus for large scale, kernel matrix, approximate kernel matrix computation, space complexity problem, novel representation, good performance, clustering performance, kernel method, time complexity problem, quadratic space, large scale problem, time complexity, sparse matrices, kernel methods, space complexity, affinity propagation, matrix computation
Field
Radial basis function kernel, Computer science, Kernel embedding of distributions, Parallel computing, Tree kernel, Polynomial kernel, Kernel (image processing), Kernel method, String kernel, Variable kernel density estimation
DocType
Conference
Citations
0
PageRank
0.34
References
4
Authors
2
Name | Order | Citations | PageRank
Mohamed E. Hussein | 1 | 286 | 17.93
Wael Abd-Almageed | 2 | 248 | 24.52