Abstract |
---|
We propose an incremental training method that partitions the original network into sub-networks, which are then gradually incorporated into the running network during training. To allow for smooth dynamic growth of the network, we introduce a look-ahead initialization that outperforms random initialization. We demonstrate that our incremental approach reaches the accuracy of the reference network baseline. Additionally, it allows us to identify smaller partitions of the original state-of-the-art network that deliver the same final accuracy while using only a fraction of the total number of parameters. This enables a potential several-fold speedup of training time. We report training results on CIFAR-10 for ResNet and VGGNet. |
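The abstract's core idea is to grow the running network by incorporating sub-networks during training, with an initialization that keeps growth smooth. The record does not specify how the look-ahead initialization works, so the sketch below only illustrates the general principle with an assumed function-preserving (zero) initialization of a newly added residual block: the grown network initially computes the same function as before, unlike a random initialization, which would perturb the output.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, blocks):
    """Apply a stack of residual blocks: x <- x + W @ x for each block W."""
    for W in blocks:
        x = x + W @ x
    return x

# A small running network with one active sub-network (block).
d = 4
blocks = [0.1 * rng.standard_normal((d, d))]
x = rng.standard_normal(d)
y_before = forward(x, blocks)

# Incremental growth: incorporate a new sub-network mid-training.
# Illustrative assumption (not the paper's actual look-ahead scheme):
# initialize the new block at zero, so the grown network initially
# computes the same function and training continues smoothly.
blocks.append(np.zeros((d, d)))
y_after = forward(x, blocks)

assert np.allclose(y_before, y_after)  # growth is function-preserving
```

A random initialization of the appended block would break this invariant, which is the kind of disruption a smooth-growth initialization is meant to avoid.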
Year | Venue | DocType |
---|---|---|
2018 | AutoML@PKDD/ECML | Journal |
arXiv ID | Volume URL | Citations
---|---|---|
abs/1803.10232 | http://ceur-ws.org/Vol-1998 | 4
PageRank | References | Authors
---|---|---|
0.38 | 8 | 4
Name | Order | Citations | PageRank
---|---|---|---|
Roxana Istrate | 1 | 16 | 2.96 |
A Cristiano I Malossi | 2 | 65 | 9.29 |
Costas Bekas | 3 | 81 | 12.75 |
Dimitrios S. Nikolopoulos | 4 | 1469 | 128.40 |