Title
An Interpretable Regression Approach Based On Bi-Sparse Optimization
Abstract
Given the increasing amounts of data and the high feature dimensionality of forecasting problems, it is challenging to build regression models that are both computationally efficient and highly accurate. Moreover, regression models commonly suffer from low interpretability when a single kernel function or a composite of multiple kernel functions is used to address nonlinear fitting problems. In this paper, we propose a bi-sparse optimization-based regression (BSOR) model and a corresponding algorithm with reconstructed row and column kernel matrices in the framework of support vector regression (SVR). The BSOR model predicts continuous output values for given input points while using zero-norm regularization to obtain sparse instance and feature sets. Experiments were run on 16 datasets to compare BSOR with SVR, linear programming SVR (LPSVR), least squares SVR (LSSVR), multi-kernel learning SVR (MKLSVR), least absolute shrinkage and selection operator regression (LASSOR), and relevance vector regression (RVR). BSOR significantly outperformed the other six regression models in predictive accuracy, identification of the fewest representative instances, selection of the fewest important features, and interpretability of results, at the cost of a slightly higher runtime.
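The abstract's bi-sparse idea, a sparse set of representative instances together with a sparse set of important features, can be illustrated with a rough sketch. The code below is an assumption-laden stand-in and not the authors' BSOR algorithm: the zero-norm penalties are approximated by ordinary L1 (Lasso) penalties, bi-sparsity is obtained by a simple alternating scheme over the kernel-matrix rows (instances) and the input columns (features), and all parameters (the alpha values, the RBF gamma, the number of rounds) are illustrative choices on synthetic data.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel

# Synthetic stand-in data: 200 instances, 30 features, only 5 of them informative.
X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=5.0, random_state=0)

feat_mask = np.ones(X.shape[1], dtype=bool)   # start with all features active
inst_mask = np.ones(X.shape[0], dtype=bool)   # and all instances active

for _ in range(3):  # a few alternating rounds (an illustrative choice)
    # Instance sparsity: fit an L1-penalized kernel expansion f(x) = K(x, X) @ beta,
    # so instances whose coefficient survives act as "representative" points.
    K = rbf_kernel(X[:, feat_mask], gamma=1.0 / feat_mask.sum())
    inst_model = Lasso(alpha=0.1, max_iter=10000).fit(K, y)
    new_inst = inst_model.coef_ != 0
    if new_inst.any():
        inst_mask = new_inst

    # Feature sparsity: fit an L1-penalized linear model on the kept instances,
    # so features whose coefficient survives are the "important" ones.
    feat_model = Lasso(alpha=0.5, max_iter=10000).fit(X[inst_mask], y[inst_mask])
    new_feat = feat_model.coef_ != 0
    if new_feat.any():
        feat_mask = new_feat

print("kept instances:", int(inst_mask.sum()), "of", X.shape[0])
print("kept features :", int(feat_mask.sum()), "of", X.shape[1])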
Year
2020
DOI
10.1007/s10489-020-01687-3
Venue
APPLIED INTELLIGENCE
Keywords
Data mining, Multi-kernel learning, Sparse learning, Zero-norm regularization, Support vector regression
DocType
Journal
Volume
50
Issue
11
ISSN
0924-669X
Citations
0
PageRank
0.34
References
0
Authors
5
Name            Order   Citations   PageRank
Zhiwang Zhang   1       81          11.15
Guangxia Gao    2       65          5.56
Tao Yao         3       39          5.33
Jing He         4       5           4.44
Ying-Jie Tian   5       18          6.34