**Abstract**

In this article, we describe the design choices behind MLPerf, a machine learning performance benchmark that has become an industry standard. The first two rounds of the MLPerf Training benchmark helped drive improvements to software-stack performance and scalability, showing a 1.3× speedup in the top 16-chip results despite higher quality targets and a 5.5× increase in system scale. The first rou...
| Field | Value |
|---|---|
| Year | 2020 |
| DOI | 10.1109/MM.2020.2974843 |
| Venue | IEEE Micro |
| Keywords | Benchmark testing, Training, Machine learning, Measurement, Computational modeling, Numerical models |
| DocType | Journal |
| Volume | 40 |
| Issue | 2 |
| ISSN | 0272-1732 |
| Citations | 8 |
| PageRank | 0.54 |
| References | 0 |
| Authors | 12 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
Peter Mattson | 1 | 8 | 1.56 |
Hanlin Tang | 2 | 29 | 5.46 |
Gu-Yeon Wei | 3 | 1927 | 214.15 |
Carole-Jean Wu | 4 | 8 | 0.54 |
Vijay Janapa Reddi | 5 | 8 | 0.54 |
Christine Cheng | 6 | 8 | 0.54 |
Cody Coleman | 7 | 8 | 0.54 |
Gregory Frederick Diamos | 8 | 1117 | 51.07 |
David Kanter | 9 | 8 | 0.54 |
Paulius Micikevicius | 10 | 9 | 1.59 |
David A. Patterson | 11 | 11093 | 1925.05 |
Guenther Schmuelling | 12 | 8 | 0.54 |