Title
COMET: A Recipe for Learning and Using Large Ensembles on Massive Data
Abstract
COMET is a single-pass MapReduce algorithm for learning on large-scale data. It builds multiple random forest ensembles on distributed blocks of data and merges them into a mega-ensemble. This approach is appropriate when learning from massive-scale data that is too large to fit on a single machine. To get the best accuracy, IVoting should be used instead of bagging to generate the training subset for each decision tree in the random forest. Experiments with two large datasets (5GB and 50GB compressed) show that COMET compares favorably (in both accuracy and training time) to learning on a subsample of the data using a serial algorithm. Finally, we propose a new Gaussian approach for lazy ensemble evaluation, which dynamically decides how many ensemble members to evaluate per data point; this can reduce evaluation cost by 100x or more.
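As a rough illustration of the lazy ensemble evaluation idea described in the abstract, the sketch below stops querying ensemble members once a Gaussian confidence interval around the running mean vote clears the decision boundary. This is a minimal sketch under assumed conditions (binary classification, ensemble members returning votes in [0, 1]), not the paper's exact procedure; the names `lazy_ensemble_predict`, `confidence_z`, and `min_members` are illustrative.

```python
import math
import random

def lazy_ensemble_predict(members, x, confidence_z=2.58, min_members=10):
    """Evaluate ensemble members on x lazily: stop once a Gaussian confidence
    interval around the running mean vote excludes the 0.5 decision boundary.

    members: list of callables, each returning a vote in [0, 1] for x.
    Returns (predicted_label, number_of_members_evaluated).
    """
    order = random.sample(range(len(members)), len(members))  # random evaluation order
    votes = []
    for idx in order:
        votes.append(members[idx](x))
        n = len(votes)
        if n < min_members:
            continue  # gather a few votes before trusting the Gaussian approximation
        mean = sum(votes) / n
        var = sum((v - mean) ** 2 for v in votes) / (n - 1)
        half_width = confidence_z * math.sqrt(var / n)
        # If the interval around the mean vote lies entirely above or below 0.5,
        # evaluating further members is unlikely to change the predicted label.
        if mean - half_width > 0.5 or mean + half_width < 0.5:
            break
    mean_vote = sum(votes) / len(votes)
    return int(mean_vote > 0.5), len(votes)
```

With a mega-ensemble of thousands of trees, points whose votes are lopsided stop after the first few dozen members under a rule like this, which is the source of the large evaluation savings the abstract refers to.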
Year
2011
DOI
10.1109/ICDM.2011.39
Venue
international conference on data mining
Keywords
evaluation cost, massive data, new gaussian approach, massive-scale data, data point, large datasets, large ensembles, best accuracy, multiple random forest ensemble, large-scale data, ensemble member, lazy ensemble evaluation, distributed processing, random forest, gaussian processes, learning artificial intelligence, data handling, decision trees, cluster computing, decision tree
DocType
Conference
Volume
abs/1103.2068
ISSN
ICDM 2011: Proceedings of the 2011 IEEE International Conference on Data Mining, pp. 41-50, 2011
Citations
12
PageRank
0.66
References
22
Authors
5
Name | Order | Citations | PageRank
Justin Basilico | 1 | 179 | 14.28
M. Arthur Munson | 2 | 28 | 1.81
Tamara G. Kolda | 3 | 5079 | 262.60
Kevin R. Dixon | 4 | 65 | 9.43
W. Philip Kegelmeyer | 5 | 3498 | 146.54