Title: Kernel Extraction via Voted Risk Minimization
Abstract
This paper studies a new framework for learning a predictor in the presence of multiple kernel functions, where the learner selects or extracts several kernel functions from potentially complex families and finds an accurate predictor defined in terms of these functions. We present an algorithm, Voted Kernel Regularization, that provides the flexibility of using very complex kernel functions, such as predictors based on high-degree polynomial kernels or narrow Gaussian kernels, while benefiting from strong learning guarantees. These guarantees suggest a new regularization penalty that depends on the Rademacher complexities of the families of kernel functions used. Our algorithm admits several other favorable properties: its optimization problem is convex, it allows for learning with non-PDS kernels, and its solutions are highly sparse, resulting in improved classification speed and memory requirements. We report the results of preliminary experiments comparing the performance of our algorithm to several baselines.
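The abstract describes a convex objective that combines a margin loss with a per-kernel-family penalty weighted by Rademacher complexity. The sketch below, which is not taken from the paper, illustrates one way such an objective could be posed as a convex program, assuming a DeepBoost-style formulation in which each kernel family contributes one dual weight per training point; the function name, the complexity estimates r, and the constants lam and beta are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

def voted_kernel_regularization(K_list, y, r, lam=1.0, beta=0.1):
    """Sketch of a Voted-Kernel-Regularization-style convex program.

    K_list : list of (m, m) Gram matrices, one per kernel family.
    y      : (m,) labels in {-1, +1}.
    r      : Rademacher-complexity estimates, one per kernel family.
    """
    m = len(y)
    # One weight vector per kernel family: one coordinate per training point.
    alphas = [cp.Variable(m) for _ in K_list]
    # Ensemble score: sum of per-family kernel expansions.
    scores = sum(K @ a for K, a in zip(K_list, alphas))
    # Average hinge loss over the training sample.
    hinge = cp.sum(cp.pos(1 - cp.multiply(y, scores))) / m
    # Complexity-weighted L1 penalty: richer families pay a larger price,
    # and the L1 norm drives most coordinates to exactly zero (sparsity).
    penalty = sum((lam * r_k + beta) * cp.norm(a, 1)
                  for r_k, a in zip(r, alphas))
    prob = cp.Problem(cp.Minimize(hinge + penalty))
    prob.solve()
    return [a.value for a in alphas]
```

Because the Gram matrices enter the objective only through linear combinations, nothing in this sketch requires them to be positive definite, which is consistent with the abstract's claim that the method allows learning with non-PDS kernels.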
Year: 2015
Venue: FE@NIPS
Field: Kernel (linear algebra), Mathematical optimization, Polynomial, Computer science, Gaussian, Regularization (mathematics), Optimization problem, Kernel (statistics)
DocType: Conference
Citations: 1
PageRank: 0.36
References: 13
Authors: 4
Name                Order    Citations    PageRank
Corinna Cortes      1        65741        120.50
Prasoon Goyal       2        248          9.94
Vitaly Kuznetsov    3        68           9.33
Mehryar Mohri       4        45024        48.21