Abstract |
---|
Kernel matrices are popular in machine learning and scientific computing, but their use is limited by the quadratic complexity of constructing and storing them. It is well known that as one varies the kernel parameter, e.g., the width parameter in radial basis function kernels, the kernel matrix changes from a smooth low-rank matrix to a diagonally dominant and then fully diagonal one. Low-rank approximation methods have been widely studied, mostly in the first case, to reduce the memory footprint and the cost of computing matrix-vector products. Here, we use ideas from scientific computing to extend these methods to situations where the matrix is not well approximated by a low-rank matrix. In particular, we construct an efficient block low-rank approximation method, which we call the Block Basis Factorization, and we show that it has $\mathcal{O}(n)$ complexity in both time and memory. Our method works for a wide range of kernel parameters, extending the domain of applicability of low-rank approximation methods, and our empirical results demonstrate its stability (small standard deviation in error) and its superiority over current state-of-the-art kernel approximation algorithms. |
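
The block low-rank idea described in the abstract can be illustrated with a minimal NumPy sketch. To be clear, this is not the paper's Block Basis Factorization (BBF shares column bases across blocks to reach $\mathcal{O}(n)$ cost); it is a generic block low-rank approximation using independent per-block truncated SVDs, with an arbitrary cluster count `p`, rank budget `r`, and kernel width `sigma`, run on synthetic data:

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    """Dense Gaussian (RBF) kernel block: K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 3))  # synthetic point set (illustrative only)
sigma = 0.5                        # width small enough that K is far from globally low-rank

# Crude spatial partition into p clusters: sort along a random direction and split.
p = 4
order = np.argsort(X @ rng.standard_normal(3))
clusters = np.array_split(order, p)

K = rbf_kernel(X, X, sigma)        # full matrix, built here only to measure the error
K_approx = np.zeros_like(K)
stored = 0
r = 20                             # per-block rank budget (arbitrary choice)
for i, ci in enumerate(clusters):
    for j, cj in enumerate(clusters):
        B = K[np.ix_(ci, cj)]
        if i == j:
            K_approx[np.ix_(ci, cj)] = B  # keep diagonal blocks dense (least compressible)
            stored += B.size
        else:
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            K_approx[np.ix_(ci, cj)] = (U[:, :r] * s[:r]) @ Vt[:r]  # rank-r truncation
            stored += r * (len(ci) + len(cj))

err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(f"relative Frobenius error {err:.2e}; stored {stored} of {K.size} entries")
```

In this per-block scheme each off-diagonal block carries its own factors, so the storage is only reduced by a constant factor; the paper's contribution is, roughly, to replace these independent factorizations with a small set of shared cluster bases, which is what brings the cost down to linear in $n$.
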
Year | Venue | Field
---|---|---
2015 | CoRR | Mathematical optimization, Radial basis function kernel, Kernel embedding of distributions, Kernel principal component analysis, Tree kernel, Polynomial kernel, Artificial intelligence, String kernel, Variable kernel density estimation, Mathematics, Machine learning, Kernel (statistics)

DocType | Volume | Citations
---|---|---
Journal | abs/1505.00398 | 4

PageRank | References | Authors
---|---|---
0.41 | 12 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Ruoxi Wang | 1 | 6 | 3.14 |
Yingzhou Li | 2 | 4 | 0.41
Michael W. Mahoney | 3 | 3297 | 218.10 |
Eric Darve | 4 | 440 | 44.79 |