Abstract
---

We propose to address the issue of sample efficiency in Deep Convolutional Neural Networks (DCNNs) with a semi-supervised training strategy that combines Hebbian learning with gradient descent: all internal layers (both convolutional and fully connected) are pre-trained using an unsupervised approach based on Hebbian learning, while the last fully connected layer (the classification layer) is trained using Stochastic Gradient Descent (SGD). Since Hebbian learning is an unsupervised method, its potential lies in the possibility of training the internal layers of a DCNN without labels; only the final fully connected layer has to be trained with labeled examples.
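The abstract does not specify which Hebbian rule is used, so the following is only a minimal sketch of the two-stage idea: unsupervised Hebbian pre-training of an internal layer (here Oja's rule, one common choice) followed by supervised SGD on the classification layer alone. The function names, hyperparameters, and toy data are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_pretrain(X, n_hidden, lr=0.01, epochs=5):
    """Unsupervised pre-training of one layer with Oja's rule (no labels).

    Oja's rule, dW_i = lr * y_i * (x - y_i * w_i), is a normalized Hebbian
    update that keeps the weight vectors bounded.
    """
    n_features = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_hidden, n_features))
    for _ in range(epochs):
        for x in X:
            y = W @ x                                  # layer activations
            W += lr * np.outer(y, x) - lr * (y ** 2)[:, None] * W
    return W

def train_classifier(H, labels, n_classes, lr=0.1, epochs=20):
    """Supervised SGD on softmax cross-entropy, final layer only."""
    n_hidden = H.shape[1]
    V = np.zeros((n_classes, n_hidden))
    for _ in range(epochs):
        for h, t in zip(H, labels):
            logits = V @ h
            p = np.exp(logits - logits.max())
            p /= p.sum()
            p[t] -= 1.0                                # grad of CE w.r.t. logits
            V -= lr * np.outer(p, h)                   # SGD step on classifier
    return V

# Toy data: 200 samples, 20 features, 3 classes (random, illustration only).
X = rng.normal(size=(200, 20))
y = rng.integers(0, 3, size=200)

W = hebbian_pretrain(X, n_hidden=16)                   # unsupervised stage: no labels
H = np.maximum(W @ X.T, 0).T                           # ReLU hidden representation
V = train_classifier(H, y, n_classes=3)                # supervised stage: labels used here only
```

Labels enter only in `train_classifier`, which is the point of the strategy: the representation layers can be trained on unlabeled data, and the labeled set is needed only for the last layer.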
Field | Value
---|---
Year | 2021
DOI | 10.1016/j.neunet.2021.08.003
Venue | Neural Networks
Keywords | Convolutional Neural Networks, Computer vision, Semi-supervised learning, Hebbian learning, Sample efficiency
DocType | Journal
Volume | 143
Issue | 1
ISSN | 0893-6080
Citations | 1
PageRank | 0.41
References | 0
Authors | 4
Name | Order | Citations | PageRank
---|---|---|---
Gabriele Lagani | 1 | 1 | 1.76 |
Fabrizio Falchi | 2 | 459 | 55.65 |
Claudio Gennaro | 3 | 490 | 57.23 |
Giuseppe Amato | 4 | 505 | 106.68 |