Abstract
---
In this paper we propose a Bayesian method for estimating architectural parameters of neural networks, namely layer size and network depth. We do this by learning concrete distributions over these parameters. Our results show that regular networks with a learned structure can generalise better on small datasets, while fully stochastic networks can be more robust to parameter initialisation. The proposed method relies on standard neural variational learning and, unlike randomised architecture search, does not require retraining of the model, thus keeping the computational overhead to a minimum.
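The abstract's core mechanism is a Concrete (Gumbel-Softmax) relaxation that makes a discrete architectural choice, such as layer size, differentiable so it can be trained by standard variational learning. Below is a minimal sketch of that idea, assuming PyTorch; the candidate sizes, variable names, and temperature are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn.functional as F

candidate_sizes = torch.tensor([16., 32., 64., 128.])  # hypothetical layer widths
logits = torch.zeros(4, requires_grad=True)            # variational parameters over sizes
temperature = 0.5                                      # Concrete relaxation temperature (assumed)

def sample_concrete(logits, temperature):
    # Gumbel-Softmax: add Gumbel noise to the logits, then apply a tempered
    # softmax to get a differentiable, approximately one-hot sample.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    return F.softmax((logits + gumbel) / temperature, dim=-1)

weights = sample_concrete(logits, temperature)
soft_size = (weights * candidate_sizes).sum()  # relaxed "layer size"
soft_size.backward()                           # gradients flow back to the logits
print(logits.grad)
```

Because the sample is differentiable, the distribution over layer sizes can be updated by gradient descent alongside the network weights, which is what lets the method avoid the retraining loop of randomised architecture search.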
Year | Venue | Field
---|---|---|
2019 | International Conference on Artificial Intelligence and Statistics | Overhead (computing), Architecture, Bayesian inference, Artificial intelligence, Artificial neural network, Retraining, Mathematics, Machine learning, Bayesian probability
DocType | Volume | Citations
---|---|---|
Journal | abs/1901.04436 | 0

PageRank | References | Authors
---|---|---|
0.34 | 0 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Georgi Dikov | 1 | 0 | 0.34 |
Patrick van der Smagt | 2 | 188 | 24.23 |
Justin Bayer | 3 | 157 | 32.38 |