Abstract |
---|
We propose a supervised learning algorithm whose aim is to derive features that explain the response variable better than the original features do. Moreover, when the distinction between positive and negative samples is meaningful, the aim is to derive features that explain the positive samples, or subsets of positive samples that share the same root cause. Each derived feature represents a single- or multi-dimensional subspace of the feature space, where each dimension is specified as a feature-range pair for numeric features and as a feature-level pair for categorical features. Unlike most Rule Learning and Subgroup Discovery algorithms, ours allows the response variable to be numeric and does not require discretization of the response. The algorithm has been applied successfully to numerous real-life root-causing tasks in chip design, manufacturing, and validation at Intel. |
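The abstract describes each derived feature as a subspace given by feature-range pairs (numeric features) and feature-level pairs (categorical features). The sketch below is only an illustration of that representation under assumed names (`DerivedFeature`, `covers`, and the example features `voltage` and `fab` are all hypothetical); it is not the paper's algorithm or API.

```python
from dataclasses import dataclass
from typing import Dict, Tuple, Union

# Hypothetical encoding of a derived feature: a subspace of the feature
# space where each dimension is a (feature, range) pair for numeric
# features or a (feature, level) pair for categorical features.
Numeric = Tuple[float, float]        # inclusive [low, high] range
Condition = Union[Numeric, str]      # range for numeric, level for categorical

@dataclass
class DerivedFeature:
    dims: Dict[str, Condition]       # feature name -> range or level

    def covers(self, sample: Dict[str, Union[float, str]]) -> bool:
        """True if the sample falls inside the subspace."""
        for feat, cond in self.dims.items():
            val = sample[feat]
            if isinstance(cond, tuple):   # numeric feature-range pair
                lo, hi = cond
                if not (lo <= val <= hi):
                    return False
            else:                         # categorical feature-level pair
                if val != cond:
                    return False
        return True

# Example: a two-dimensional subspace over one numeric and one
# categorical feature (illustrative values only).
f = DerivedFeature({"voltage": (0.9, 1.1), "fab": "F28"})
print(f.covers({"voltage": 1.05, "fab": "F28"}))  # True: inside the subspace
print(f.covers({"voltage": 1.25, "fab": "F28"}))  # False: voltage out of range
```

A derived feature of this shape acts as a binary indicator column, which is how such subspaces can be compared against the original features for explanatory power.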
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/DSAA.2019.00045 | 2019 IEEE International Conference on Data Science and Advanced Analytics (DSAA) |
Keywords | Field | DocType |
---|---|---|
feature selection, rule learning, subgroup discovery, range analysis | Discretization, Feature vector, Subspace topology, Feature selection, Pattern recognition, Categorical variable, Computer science, Integrated circuit design, Artificial intelligence, Supervised training, Range analysis | Conference |
ISSN | ISBN | Citations |
---|---|---|
2472-1573 | 978-1-7281-4494-8 | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 8 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Zurab Khasidashvili | 1 | 307 | 25.40 |
Adam J. Norman | 2 | 0 | 0.68 |