Abstract |
---|
We present a method for efficiently training binary and multiclass kernelized SVMs on a Graphics Processing Unit (GPU). Our method applies to a broad range of kernels, including the popular Gaussian kernel, on datasets as large as the available memory on the graphics card. Our approach is distinguished from earlier work in that it cleanly and efficiently handles sparse datasets through the use of a novel clustering technique. Our optimization algorithm is also specifically designed to take advantage of the graphics hardware, which leads to different algorithmic choices than those preferred in serial implementations. Our easy-to-use library is orders of magnitude faster than existing CPU libraries, and several times faster than prior GPU approaches. |
Year | DOI | Venue |
---|---|---|
2011 | 10.1145/2020408.2020548 | KDD |
Keywords | Field | DocType
---|---|---|
sparse datasets,kernelized svms,graphics processing unit,earlier work,cpu library,available memory,broad range,gpu-tailored approach,prior gpu approach,graphics hardware,different algorithmic choice,graphics card,gpgpu | Data mining,Graphics hardware,Computer science,Artificial intelligence,Cluster analysis,Binary number,Graphics,Kernel (linear algebra),Support vector machine,Parallel computing,General-purpose computing on graphics processing units,Graphics processing unit,Machine learning | Conference
Citations | PageRank | References
---|---|---|
27 | 1.16 | 9
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Andrew Cotter | 1 | 851 | 78.35 |
Nathan Srebro | 2 | 3892 | 349.42 |
Joseph Keshet | 3 | 925 | 69.84 |