Title: The role of dimensionality reduction in linear classification
Abstract: Dimensionality reduction (DR) is often used as a preprocessing step in classification, but usually one first fixes the DR mapping, possibly using label information, and then learns a classifier (a filter approach). Best performance would be obtained by optimizing the classification error jointly over the DR mapping and the classifier (a wrapper approach), but this is a difficult nonconvex problem, particularly with nonlinear DR. Using the method of auxiliary coordinates, we give a simple, efficient algorithm to train a combination of nonlinear DR and a classifier, and apply it to an RBF mapping with a linear SVM. The algorithm alternates steps that train the RBF mapping and the linear SVM as ordinary regression and classification problems, respectively, with a closed-form step that coordinates both. The resulting nonlinear low-dimensional classifier achieves classification errors competitive with the state of the art, is fast at training and testing, and lets the user easily trade off runtime against classification accuracy. We then study the role of nonlinear DR in linear classification, and the interplay between the DR mapping, the number of latent dimensions, and the number of classes. When trained jointly, the DR mapping takes an extreme role in eliminating variation: it tends to collapse classes in latent space, erasing all manifold structure, and to lay out the class centroids so that they are linearly separable with maximum margin.
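As an illustration of the pipeline the abstract contrasts against, a filter-style baseline (fix the RBF mapping first, then fit a linear SVM on the reduced features) can be sketched as below. This is not the paper's joint wrapper algorithm; the dataset, center-selection rule, and all parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def rbf_features(X, centers, gamma):
    """Map X to Gaussian RBF features, one dimension per center."""
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# Synthetic 3-class problem in 20 dimensions (illustrative data only).
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Filter approach: fix the DR mapping first (10 RBF centers drawn from the
# training set, i.e. a nonlinear reduction from 20 to 10 dimensions) ...
rng = np.random.RandomState(0)
centers = Xtr[rng.choice(len(Xtr), size=10, replace=False)]
Ftr = rbf_features(Xtr, centers, gamma=0.05)
Fte = rbf_features(Xte, centers, gamma=0.05)

# ... then train a linear SVM on the reduced representation.
clf = LinearSVC(C=1.0, max_iter=5000).fit(Ftr, ytr)
acc = clf.score(Fte, yte)
print(f"test accuracy: {acc:.2f}")
```

The paper's wrapper approach would instead optimize the RBF centers, the projection, and the SVM jointly (via auxiliary coordinates), which this sketch deliberately omits.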
Year: 2014
Venue: CoRR
Field: Linear separability, Dimensionality reduction, Nonlinear system, Regression, Pattern recognition, Preprocessor, Artificial intelligence, Linear classifier, Classifier (linguistics), Centroid, Machine learning, Mathematics
DocType: Journal
Volume: abs/1405.6444
Citations: 0
PageRank: 0.34
References: 11
Authors: 2
Name                          Order  Citations  PageRank
Weiran Wang                   1      114        9.99
Miguel Á. Carreira-Perpiñán   2      1198       109.31