Title
Forest Packing: Fast, Parallel Decision Forests.
Abstract
Machine learning has an emerging critical role in high-performance computing to modulate simulations, extract knowledge from massive data, and replace numerical models with efficient approximations. Decision forests are a critical tool because they provide insight into model operation that is essential for interpreting learned results. While decision forests are trivially parallelizable, the traversals of tree data structures incur many random memory accesses and are very slow. We present memory packing techniques that reorganize learned forests to minimize cache misses during classification. The resulting layout is hierarchical. At low levels, we pack the nodes of multiple trees into contiguous memory blocks so that each memory access fetches data for multiple trees. At higher levels, we use leaf cardinality to identify the most popular paths through a tree and collocate those paths in cache lines. We extend this layout with out-of-order execution and cache-line prefetching to increase memory throughput. Together, these optimizations increase the performance of classification in ensembles by a factor of four over an optimized C++ implementation and a factor of 50 over a popular R language implementation.
Year
2018
Venue
arXiv: Performance
Field
Parallelizable manifold, Numerical models, Computer science, Cache, Parallel computing, Tree (data structure), Cardinality, Throughput, R language
DocType
Journal
Volume
abs/1806.07300
Citations
0
PageRank
0.34
References
0
Authors
5
Name                  Order  Citations  PageRank
James Browne          1      0          0.34
Tyler M. Tomita       2      0          0.68
Disa Mhembere         3      63         5.42
Randal C. Burns       4      8          4.19
Joshua T. Vogelstein  5      273        31.99