Abstract |
---|
In this paper we present a new method for joint feature selection and classifier learning using a sparse Bayesian approach. Both tasks are performed by optimizing a global loss function that combines a term for the empirical loss with a term imposing a feature-selection and regularization constraint on the parameters. To minimize this function we use a recently proposed technique, the Boosted Lasso algorithm, which follows the regularization path of the empirical risk associated with our loss function. We develop the algorithm for a well-known non-parametric classification method, the relevance vector machine, and perform experiments on a synthetic data set and three databases from the UCI Machine Learning Repository. The results show that our method selects the relevant features and, in some cases, increases classification accuracy when feature selection is performed. |
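The abstract's minimization strategy, the Boosted Lasso (BLasso) of Zhao and Yu, traces the regularization path of a penalized loss L(β) + λ‖β‖₁ by fixed-size coordinate steps: a forward step that most reduces the empirical loss, and a backward step toward zero whenever it lowers the penalized loss. The following is a minimal sketch of that generic scheme, not the paper's RVM-based implementation; the squared loss, the step size `eps`, and the tolerance `xi` are illustrative assumptions.

```python
import numpy as np

def loss(X, y, beta):
    # Empirical-loss term; squared error stands in for the paper's
    # RVM likelihood purely for illustration.
    r = y - X @ beta
    return 0.5 * (r @ r)

def blasso(X, y, eps=0.05, n_steps=400, xi=1e-6):
    """Sketch of the Boosted Lasso: fixed eps-steps on single coordinates,
    following the regularization path as lambda shrinks."""
    d = X.shape[1]
    beta = np.zeros(d)

    def step(b, j, s):
        b2 = b.copy()
        b2[j] += s
        return b2

    def forward():
        # Coordinate/sign pair whose eps-step most reduces the empirical loss.
        cands = [(j, s) for j in range(d) for s in (-eps, eps)]
        return min(cands, key=lambda js: loss(X, y, step(beta, *js)))

    # Initial forward step fixes the starting regularization level lambda.
    j, s = forward()
    lam = (loss(X, y, beta) - loss(X, y, step(beta, j, s))) / eps
    beta = step(beta, j, s)

    for _ in range(n_steps):
        active = np.flatnonzero(beta)
        # Backward candidate: shrink one active coefficient toward zero.
        back = min(((j, -eps * np.sign(beta[j])) for j in active),
                   key=lambda js: loss(X, y, step(beta, *js)))
        d_loss = loss(X, y, step(beta, *back)) - loss(X, y, beta)
        if d_loss <= lam * eps - xi:
            # Penalized loss decreases: take the backward step.
            beta = step(beta, *back)
        else:
            # Otherwise take a forward step and relax lambda along the path.
            j, s = forward()
            lam = min(lam, (loss(X, y, beta) - loss(X, y, step(beta, j, s))) / eps)
            beta = step(beta, j, s)
        if lam <= 0:  # no forward step reduces the loss: path is complete
            break
    return beta
```

On data where only one feature carries signal, the path concentrates weight on that coordinate while the others stay near zero, which is the feature-selection behavior the abstract reports.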
Year | DOI | Venue |
---|---|---|
2008 | 10.1007/s10044-008-0130-1 | Pattern Anal. Appl. |
Keywords | DocType | Volume |
feature selection, classification accuracy, global loss function, bayesian learning, classification, uci machine learning repository, joint feature selection, classifier learning, loss function, boosted lasso algorithm, sparse bayesian approach, empirical loss, non-parametric classification method, synthetic data, relevance vector machine, machine learning, bayesian approach | Journal | 11
Issue | ISSN | Citations |
3-4 | 1433-755X | 4 |
PageRank | References | Authors |
0.44 | 20 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Àgata Lapedriza | 1 | 514 | 21.53 |
Santi Seguí | 2 | 85 | 9.11 |
David Masip | 3 | 143 | 16.50 |
Jordi Vitrià | 4 | 737 | 98.14 |