| Title |
| --- |
| Direct Estimation of Weights and Efficient Training of Deep Neural Networks without SGD |
| Abstract |
| --- |
| We argue that learning a hierarchy of features in a hierarchical dataset requires lower layers to approach convergence faster than the layers above them. We show that, if this assumption holds, we can analytically approximate the outcome of stochastic gradient descent (SGD) for each layer. We find that the weights should converge to a class-based PCA, with some weights in every layer dedicated to the principal components of each label class. The class-based PCA allows us to train layers directly, without SGD, often leading to a dramatic decrease in training complexity. We demonstrate the effectiveness of this by using our results to replace one and two convolutional layers in networks trained on the MNIST, CIFAR10, and CIFAR100 datasets, showing that our method achieves performance superior or comparable to similar architectures trained using SGD. |
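The abstract's central idea, setting a layer's weights to the top principal components of each label class rather than learning them by SGD, can be sketched as follows. This is a minimal illustration of class-based PCA weight construction, not the paper's actual implementation; the function name, API, and toy data are assumptions.

```python
import numpy as np

def class_pca_weights(X, y, comps_per_class):
    """Hypothetical sketch: build a weight matrix by stacking the
    top principal components of each label class, so some rows are
    dedicated to each class (as the abstract describes)."""
    weights = []
    for c in np.unique(y):
        Xc = X[y == c]
        Xc = Xc - Xc.mean(axis=0)  # center the class's samples
        # rows of Vt are the principal directions of this class
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        weights.append(Vt[:comps_per_class])
    # shape: (n_classes * comps_per_class, n_features)
    return np.vstack(weights)

# toy example: 2 classes, 5-dimensional inputs, 2 components per class
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)
W = class_pca_weights(X, y, comps_per_class=2)
print(W.shape)  # (4, 5)
```

For a convolutional layer, the same construction would be applied to extracted image patches, with each principal component reshaped into a filter.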
| Year | DOI | Venue |
| --- | --- | --- |
| 2019 | 10.1109/icassp.2019.8682781 | 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
| Field | DocType | ISSN |
| --- | --- | --- |
| Convergence (routing), Stochastic gradient descent, MNIST database, Pattern recognition, Computer science, Artificial intelligence, Hierarchy, Backpropagation, Artificial neural network, Principal component analysis, Deep neural networks | Conference | 1520-6149 |
| Citations | PageRank | References |
| --- | --- | --- |
| 0 | 0.34 | 0 |
| Authors |
| --- |
| 3 |
| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Nima Dehmamy | 1 | 4 | 2.07 |
| Neda Rohani | 2 | 10 | 5.71 |
| Aggelos K. Katsaggelos | 3 | 3410 | 340.41 |