Title
Data discretization: taxonomy and big data challenge
Abstract
Discretization of numerical data is one of the most influential data preprocessing tasks in knowledge discovery and data mining. The purpose of attribute discretization is to find concise data representations as categories that are adequate for the learning task, retaining as much information from the original continuous attribute as possible. In this article, we present an updated overview of discretization techniques together with a complete taxonomy of the leading discretizers. Despite the great impact of discretization as a data preprocessing technique, only a few elementary approaches have been developed in the literature for Big Data. The purpose of this article is twofold: to present a comprehensive taxonomy of discretization techniques that helps practitioners use the algorithms, and to demonstrate that standard discretization methods can be parallelized on Big Data platforms such as Apache Spark, boosting both performance and accuracy. We thus propose a distributed implementation of one of the most well-known discretizers based on Information Theory, obtaining better results than the original entropy minimization discretizer proposed by Fayyad and Irani. Our scheme goes beyond a simple parallelization, and it is intended to be the first discretizer to face the Big Data challenge. WIREs Data Mining Knowl Discov 2016, 6:5-21. doi: 10.1002/widm.1173
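The discretizer the article distributes is Fayyad and Irani's entropy minimization heuristic with the minimum description length (MDL) stopping rule: sort the attribute values, choose the cut point that minimizes the weighted class entropy of the two resulting partitions, keep the cut only if its information gain passes the MDL test, and recurse on each side. The following single-machine sketch in Scala illustrates that criterion; the names MDLPSketch and cutPoints are ours for illustration and do not come from the paper or its Spark implementation.

import scala.math.log

// Minimal sketch of Fayyad & Irani's entropy-minimization (MDLP) discretizer.
// `data` holds (value, classLabel) pairs for one numeric attribute.
object MDLPSketch {
  private def log2(x: Double): Double = log(x) / log(2.0)

  // Shannon entropy of the class distribution in a sample.
  private def entropy(labels: Seq[Int]): Double = {
    val n = labels.size.toDouble
    labels.groupBy(identity).values.map { g =>
      val p = g.size / n
      -p * log2(p)
    }.sum
  }

  // Accepted cut points for one attribute, found recursively.
  def cutPoints(data: Seq[(Double, Int)]): List[Double] = {
    val sorted = data.sortBy(_._1)
    val labels = sorted.map(_._2)
    val n = sorted.size
    if (n < 2) return Nil

    // Candidate cuts lie between adjacent distinct values. (Fayyad & Irani
    // show class-boundary points suffice; we check all value boundaries
    // for simplicity.)
    val candidates = (1 until n).filter(i => sorted(i - 1)._1 != sorted(i)._1)
    if (candidates.isEmpty) return Nil

    // Pick the cut minimizing the weighted entropy of the two halves.
    val (bestIdx, bestE) = candidates.map { i =>
      val (l, r) = labels.splitAt(i)
      (i, (l.size * entropy(l) + r.size * entropy(r)) / n)
    }.minBy(_._2)

    val (left, right) = labels.splitAt(bestIdx)
    val ent  = entropy(labels)
    val gain = ent - bestE
    val k  = labels.distinct.size
    val k1 = left.distinct.size
    val k2 = right.distinct.size
    // MDL acceptance criterion from Fayyad & Irani (1993).
    val delta = log2(math.pow(3.0, k) - 2.0) -
      (k * ent - k1 * entropy(left) - k2 * entropy(right))
    if (gain <= (log2(n - 1.0) + delta) / n) Nil
    else {
      val cut = (sorted(bestIdx - 1)._1 + sorted(bestIdx)._1) / 2.0
      val (dl, dr) = sorted.splitAt(bestIdx)
      cutPoints(dl) ::: cut :: cutPoints(dr)
    }
  }
}

In the distributed scheme the article proposes, the costly step, tallying class frequencies at every candidate cut, would be reformulated as Spark aggregations over partitioned, sorted data rather than the sequential scan shown here.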
Year
2016
DOI
10.1002/widm.1173
Venue
Periodicals
Field
Information theory, Data mining, Discretization, Spark (mathematics), Computer science, Entropy minimization, Data pre-processing, Artificial intelligence, Boosting (machine learning), Knowledge extraction, Big data, Machine learning
DocType
Journal
Volume
6
Issue
1
ISSN
1942-4787
Citations
22
PageRank
0.72
References
45
Authors
8