Abstract |
---|
Given a classifier ensemble and a dataset, many examples may be confidently and accurately classified after only a subset of the base models in the ensemble is evaluated. Dynamically deciding to classify early can reduce both mean latency and CPU cost without harming the accuracy of the original ensemble. To achieve such gains, we propose jointly optimizing the evaluation order of the base models and the early-stopping thresholds. Our proposed objective is a combinatorial optimization problem, but we provide a greedy algorithm that achieves a 4-approximation of the optimal solution under certain assumptions, which is also the best achievable polynomial-time approximation bound. Experiments on benchmark and real-world problems show that the proposed Quit When You Can (QWYC) algorithm can speed up average evaluation time by 1.8–2.7 times even on jointly trained ensembles, which are more difficult to speed up than independently or sequentially trained ensembles. QWYC's joint optimization of ordering and thresholds also outperformed previous fixed orderings in experiments, including gradient boosted trees' ordering. |
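The early-exit mechanism the abstract describes can be illustrated with a minimal sketch: base models are evaluated in a fixed order, and evaluation stops as soon as the running score clears a per-stage confidence threshold. The additive scoring, the per-stage interval thresholds, and the sign-based final decision here are illustrative assumptions, not the paper's exact formulation.

```python
def early_exit_predict(models, thresholds, x):
    """Return (label, n_models_evaluated).

    models: list of callables x -> float (additive scores, as in boosting)
    thresholds: list of (lo, hi) pairs; after stage i, if the partial sum
        falls outside [lo, hi], classify immediately ("quit when you can").
    """
    score = 0.0
    for i, (model, (lo, hi)) in enumerate(zip(models, thresholds)):
        score += model(x)
        if score <= lo:          # confidently negative: quit early
            return -1, i + 1
        if score >= hi:          # confidently positive: quit early
            return +1, i + 1
    # All models evaluated: fall back to the full ensemble's decision.
    return (1 if score >= 0 else -1), len(models)


# Toy ensemble of three identical sign stumps (illustrative only).
models = [lambda x: 1.0 if x > 0 else -1.0 for _ in range(3)]
thresholds = [(-0.5, 0.5), (-1.5, 1.5), (float("-inf"), float("inf"))]
print(early_exit_predict(models, thresholds, 5.0))  # -> (1, 1): quits after 1 of 3 models
```

For an easy example the partial sum crosses the first threshold immediately, so only one base model is evaluated; harder examples fall through to later stages, which is what reduces mean latency without changing the full ensemble's decision on examples that reach the end.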
Year | DOI | Venue |
---|---|---|
2018 | 10.1145/3451209 | ACM Journal on Emerging Technologies in Computing Systems |
Keywords | Field | DocType |
Efficient ensemble evaluation, ensemble learning, combinatorial optimization, gradient boosting | Mathematical optimization,Combinatorial optimization problem,Latency (engineering),Greedy algorithm,Classifier (machine learning),Time complexity,Mathematics | Journal
Volume | Issue | ISSN |
17 | 4 | 1550-4832 |
Citations | PageRank | References |
0 | 0.34 | 0 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Serena Wang | 1 | 4 | 4.09 |
Maya R. Gupta | 2 | 5 | 1.12 |
Seungil You | 3 | 39 | 6.79 |