Title
A Boo(n) for Evaluating Architecture Performance.
Abstract
We point out several important problems with the common practice of using the best single-model performance to compare deep learning architectures, and we propose a method that corrects these flaws. Each time a model is trained, one gets a different result due to random factors in the training process, such as random parameter initialization and random data shuffling. Reporting the best single-model performance does not appropriately account for this stochasticity. Furthermore, among other problems, the expected best result increases with the number of experiments run, so scores obtained from different numbers of runs are not directly comparable. We propose the normalized expected best-out-of-n performance (Boo_n) as a way to correct these problems.
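The quantity at the heart of the abstract is the expected best-out-of-n performance. As a rough illustration of that idea only (not necessarily the paper's exact estimator, and without the normalization step of Boo_n), the sketch below estimates the expected maximum of n runs from m observed results by treating them as an empirical distribution and applying the standard order-statistic formula; the function name expected_best_of_n and the example accuracies are hypothetical.

```python
import numpy as np

def expected_best_of_n(results, n):
    """Estimate the expected best-out-of-n result from m observed runs.

    Treats the m results as an empirical distribution and returns the
    expected maximum of n i.i.d. draws (with replacement) from it, via
    E[max] = sum_i [(i/m)^n - ((i-1)/m)^n] * x_(i) for sorted x_(1..m).
    """
    x = np.sort(np.asarray(results, dtype=float))  # ascending order statistics
    m = x.size
    i = np.arange(1, m + 1)
    # Probability that the maximum of n draws equals the i-th order statistic
    weights = (i / m) ** n - ((i - 1) / m) ** n
    return float(np.dot(weights, x))

# Hypothetical accuracies from m = 10 training runs of one architecture
runs = [0.712, 0.698, 0.705, 0.721, 0.690, 0.715, 0.708, 0.700, 0.717, 0.703]
print(expected_best_of_n(runs, n=1))  # equals the plain mean of the runs
print(expected_best_of_n(runs, n=5))  # larger: the expected best grows with n
```

The second call illustrates the problem the abstract describes: the same set of runs yields a higher "best" score simply because more experiments are considered.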
Year
2018
Venue
international conference on machine learning
Journal
Proceedings of the 35th International Conference on Machine Learning (ICML 2018), Volume 80 of the Proceedings of Machine Learning Research (PMLR)
Volume
abs/1807.01961
Citations
0
PageRank
0.34
References
11
Authors
3
Name             Order  Citations  PageRank
Ondrej Bajgar    1      110        5.45
Rudolf Kadlec    2      229        16.25
Jan Kleindienst  3      220        23.74