Title
NAS-Bench-101: Towards Reproducible Neural Architecture Search.
Abstract
Recent advances in neural architecture search (NAS) demand tremendous computational resources. This makes it difficult to reproduce experiments and imposes a barrier to entry for researchers without access to large-scale computation. We aim to ameliorate these problems by introducing NAS-Bench-101, the first public architecture dataset for NAS research. To build NAS-Bench-101, we carefully constructed a compact, yet expressive, search space, exploiting graph isomorphisms to identify 423k unique convolutional architectures. We trained and evaluated all of these architectures multiple times on CIFAR-10 and compiled the results into a large dataset. Altogether, NAS-Bench-101 contains the metrics of over 5 million models, the largest dataset of its kind thus far. This allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the pre-computed dataset. We demonstrate its utility by analyzing the dataset as a whole and by benchmarking a range of architecture optimization algorithms.
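The abstract's claim that model quality can be evaluated "in milliseconds by querying the pre-computed dataset" is easiest to see with a small example. The sketch below assumes the open-source nasbench Python package released alongside the paper (github.com/google-research/nasbench) and a locally downloaded copy of the dataset; the file name, operation labels, and metric keys are best-effort assumptions and should be checked against the repository, not taken as the authoritative API.

# Minimal sketch of looking up pre-computed metrics for one architecture.
# Assumes the open-source `nasbench` package and a downloaded dataset file;
# exact names below are assumptions, not a verbatim copy of the released API.
import numpy as np
from nasbench import api

# Load the pre-computed benchmark (a TFRecord file distributed by the authors).
nasbench = api.NASBench('nasbench_only108.tfrecord')

# An architecture is a small DAG: an upper-triangular adjacency matrix plus an
# operation label for each vertex, including the input and output vertices.
model_spec = api.ModelSpec(
    matrix=np.array([[0, 1, 1, 0, 0, 0, 0],   # input feeds vertices 1 and 2
                     [0, 0, 0, 1, 0, 0, 0],
                     [0, 0, 0, 0, 1, 0, 0],
                     [0, 0, 0, 0, 0, 1, 0],
                     [0, 0, 0, 0, 0, 0, 1],
                     [0, 0, 0, 0, 0, 0, 1],
                     [0, 0, 0, 0, 0, 0, 0]]),  # output vertex
    ops=['input', 'conv3x3-bn-relu', 'conv1x1-bn-relu', 'maxpool3x3',
         'conv3x3-bn-relu', 'conv3x3-bn-relu', 'output'])

# Querying returns the stored training/validation/test statistics instead of
# training the model from scratch, so a NAS algorithm can be benchmarked cheaply.
metrics = nasbench.query(model_spec)
print(metrics['validation_accuracy'], metrics['training_time'])

Note that many different matrix/ops encodings describe the same computation; the benchmark's use of graph isomorphism, mentioned in the abstract, is what collapses these duplicates down to the 423k unique architectures that were actually trained.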
Year
2019
Venue
arXiv: Learning
Field
Architecture, Computer architecture, Computer science, Artificial intelligence, Machine learning
DocType
Journal
Volume
abs/1902.09635
Citations
3
PageRank
0.36
References
26
Authors
6
Name | Order | Citations | PageRank
Chris Ying | 1 | 3 | 0.36
Aaron Klein | 2 | 3 | 0.36
Esteban Real | 3 | 314 | 12.16
Eric M. Christiansen | 4 | 64 | 4.61
Michael Kuperberg | 5 | 7589 | 529.66
Frank Hutter | 6 | 2610 | 127.14