Abstract |
---|
Manifold learning and dimensionality reduction methods provide a low-dimensional embedding for a collection of training samples. These methods are based on the eigenvalue decomposition of the kernel matrix formed from the training samples. In [2] the embedding is extended to new test samples using the Nyström approximation method. This paper addresses the pre-image problem for these methods: finding the mapping back from the embedding space to the input space for new test points. The relationship of these learning methods to kernel principal component analysis [6], together with the connection of the out-of-sample problem to the pre-image problem [1], is used to provide the pre-image. |
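The pipeline the abstract outlines (eigendecomposition of the training kernel matrix, Nyström-style out-of-sample embedding, then recovery of a pre-image in input space) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the RBF kernel, the toy data, the bandwidth `sigma`, and the Mika-style fixed-point iteration for the pre-image step are all assumptions.

```python
# Illustrative sketch only: kernel PCA embedding, a Nystrom-style
# out-of-sample extension, and an RBF pre-image via fixed-point iteration.
# Toy data, kernel choice, and bandwidth are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))   # training samples (toy data)
n = len(X)
sigma = 1.0                    # assumed RBF bandwidth

def rbf(A, B):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Eigendecomposition of the (centered) training kernel matrix.
K = rbf(X, X)
H = np.eye(n) - np.ones((n, n)) / n     # centering matrix
Kc = H @ K @ H
w, V = np.linalg.eigh(Kc)
idx = np.argsort(w)[::-1][:2]           # keep the two leading components
alpha = V[:, idx] / np.sqrt(w[idx])     # scaled eigenvectors (KPCA coefficients)

# Nystrom-style out-of-sample embedding of a new test point z0.
z0 = rng.normal(size=(1, 2))
kz = rbf(z0, X)
kz_c = kz - K.mean(axis=0) - kz.mean(axis=1, keepdims=True) + K.mean()
y = kz_c @ alpha                        # embedding coordinates of z0

# Pre-image: the feature-space reconstruction is a weighted combination of
# training images; recover an input-space point by fixed-point iteration
# (Mika-style scheme for the Gaussian kernel).
g = alpha @ y.ravel()
gamma = g + (1.0 - g.sum()) / n         # weights, including the centering offset
z = z0.copy()
for _ in range(100):
    wts = gamma * rbf(z, X).ravel()
    z = (wts @ X / wts.sum()).reshape(1, -1)
```

After the loop, `z` is an approximate pre-image in input space of the point whose embedding is `y`; the fixed-point scheme is sensitive to initialization, so starting from the test point itself (as here) is a common choice.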
Year | DOI | Venue |
---|---|---|
2010 | 10.1109/ICMLA.2010.146 | International Conference on Machine Learning and Applications (ICMLA) |
Keywords | Field | DocType |
---|---|---|
low dimensional,kernel matrix,manifold learning,embedding space,input space,kernel principal component analysis,pre-image problem,dimensional reduction methods,dimensional reduction method,out-of-sample problem,training sample,mathematical model,manifolds,learning artificial intelligence,covariance matrix,eigenvalue decomposition,kernel,principal component analysis,approximation theory,machine learning | Kernel (linear algebra),Embedding,Computer science,Kernel embedding of distributions,Manifold alignment,Kernel principal component analysis,Artificial intelligence,Dimensional reduction,Nonlinear dimensionality reduction,Manifold,Machine learning | Conference |
ISBN | Citations | PageRank |
---|---|---|
978-1-4244-9211-4 | 1 | 0.35 |
References | Authors |
---|---|
10 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Omar Arif | 1 | 22 | 5.87 |
Patricio Vela | 2 | 17 | 2.10 |
Wayne Daley | 3 | 14 | 2.34 |