Abstract
---
Recent advances in neural architecture search (NAS) demand tremendous computational resources. This makes it difficult to reproduce experiments and imposes a barrier-to-entry to researchers without access to large-scale computation. We aim to ameliorate these problems by introducing NAS-Bench-101, the first public architecture dataset for NAS research. To build NAS-Bench-101, we carefully constructed a compact, yet expressive, search space, exploiting graph isomorphisms to identify 423k unique convolutional architectures. We trained and evaluated all of these architectures multiple times on CIFAR-10 and compiled the results into a large dataset. All together, NAS-Bench-101 contains the metrics of over 5 million models, the largest dataset of its kind thus far. This allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the pre-computed dataset. We demonstrate its utility by analyzing the dataset as a whole and by benchmarking a range of architecture optimization algorithms.
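The abstract's point about "exploiting graph isomorphisms" can be illustrated with a small sketch: a NAS-Bench-101 cell is an upper-triangular adjacency matrix plus per-vertex operation labels, and two cells that differ only by relabeling interior vertices are the same architecture. The brute-force canonicalization below is an illustrative assumption, not the authors' actual hashing code; the op-label strings mimic those used in the released dataset.

```python
from itertools import permutations

def canonical_form(matrix, ops):
    """Canonical (matrix, ops) key for a cell, invariant under relabeling of
    interior vertices; the input vertex (0) and output vertex (n-1) stay fixed.
    Brute force over permutations -- fine for the small cells used here."""
    n = len(matrix)
    best = None
    for perm in permutations(range(1, n - 1)):
        mapping = (0,) + perm + (n - 1,)
        m = tuple(tuple(matrix[mapping[i]][mapping[j]] for j in range(n))
                  for i in range(n))
        o = tuple(ops[v] for v in mapping)
        if best is None or (m, o) < best:
            best = (m, o)
    return best

# Two cells that differ only by swapping interior vertices 1 and 2:
adj = [[0, 1, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
cell_a = canonical_form(adj, ['input', 'conv3x3-bn-relu', 'maxpool3x3', 'output'])
cell_b = canonical_form(adj, ['input', 'maxpool3x3', 'conv3x3-bn-relu', 'output'])
assert cell_a == cell_b  # isomorphic cells collapse to one key
```

Deduplicating enumerated cells by such a canonical key is how a raw enumeration collapses to the 423k unique architectures the abstract mentions.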
Year | Venue | Field
---|---|---
2019 | arXiv: Learning | Architecture, Computer architecture, Computer science, Artificial intelligence, Machine learning

DocType | Volume | Citations
---|---|---
Journal | abs/1902.09635 | 3

PageRank | References | Authors
---|---|---
0.36 | 26 | 6
Name | Order | Citations | PageRank |
---|---|---|---
Chris Ying | 1 | 3 | 0.36 |
Aaron Klein | 2 | 3 | 0.36 |
Esteban Real | 3 | 314 | 12.16 |
Eric M. Christiansen | 4 | 64 | 4.61 |
Kevin Murphy | 5 | 7589 | 529.66 |
Frank Hutter | 6 | 2610 | 127.14 |