Title
To Join or Not to Join?: Thinking Twice about Joins before Feature Selection
Abstract
Closer integration of machine learning (ML) with data processing is a booming area in both the data management industry and academia. Almost all ML toolkits assume that the input is a single table, but many datasets are not stored as single tables due to normalization. Thus, analysts often perform key-foreign key joins to obtain features from all base tables and apply a feature selection method, either explicitly or implicitly, with the aim of improving accuracy. In this work, we show that the features brought in by such joins can often be ignored without affecting ML accuracy significantly, i.e., we can "avoid joins safely." We identify the core technical issue that could cause accuracy to decrease in some cases and analyze this issue theoretically. Using simulations, we validate our analysis and measure the effects of various properties of normalized data on accuracy. We apply our analysis to design easy-to-understand decision rules to predict when it is safe to avoid joins in order to help analysts exploit this runtime-accuracy trade-off. Experiments with multiple real normalized datasets show that our rules are able to accurately predict when joins can be avoided safely, and in some cases, this led to significant reductions in the runtime of some popular feature selection methods.
Year: 2016
DOI: 10.1145/2882903.2882952
Venue: MOD
Field: Decision rule, Data mining, Data processing, Joins, Normalization (statistics), Feature selection, Computer science, Exploit, Feature engineering, Data management, Database
DocType: Conference
Citations: 21
PageRank: 0.72
References: 24
Authors: 4
Name                 Order  Citations  PageRank
Arun Kumar           1      451        36.43
Jeffrey F. Naughton  2      8363       1913.71
Jignesh M. Patel     3      4406       288.44
Xiaojin Zhu          4      3586       222.74