Abstract |
---|
In this paper we present an analog on-chip learning architecture for a gradient descent learning algorithm: the Weight Perturbation learning algorithm. From the circuit implementation point of view, our approach is based on current-mode and translinear circuits. The proposed architecture is very efficient in terms of speed, size, precision and power consumption; moreover, it also exhibits high scalability and modularity. |
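The Weight Perturbation algorithm named in the abstract estimates the loss gradient by finite differences, perturbing one weight at a time and observing the change in the error, which is what makes it attractive for analog hardware (no backpropagation circuitry needed). A minimal software sketch of one update step, with function and parameter names of our own choosing (not from the paper):

```python
def weight_perturbation_step(weights, loss, lr=0.05, pert=1e-3):
    """One Weight Perturbation update (sketch, not the paper's circuit).

    For each weight, the gradient is estimated as
    (loss(w + pert) - loss(w)) / pert, then a standard
    gradient-descent step is applied with learning rate lr.
    """
    base = loss(weights)
    new_weights = list(weights)
    for i in range(len(weights)):
        perturbed = list(weights)
        perturbed[i] += pert                      # perturb one weight
        grad_est = (loss(perturbed) - base) / pert  # finite-difference gradient
        new_weights[i] = weights[i] - lr * grad_est
    return new_weights

# Toy usage: minimize a quadratic loss with minimum at (2, -1).
loss = lambda w: (w[0] - 2.0) ** 2 + (w[1] + 1.0) ** 2
w = [0.0, 0.0]
for _ in range(500):
    w = weight_perturbation_step(w, loss)
```

Because only forward evaluations of the loss are required, each weight-update circuit needs no knowledge of the network topology, which is consistent with the modularity and scalability claims in the abstract.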
Year | DOI | Venue |
---|---|---|
2000 | 10.1109/IJCNN.2000.860776 | Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000) |
Keywords | Field | DocType |
---|---|---|
VLSI,analogue integrated circuits,current-mode circuits,gradient methods,learning (artificial intelligence),neural chips,neural net architecture,VLSI architecture,analog on-chip learning architecture,current mode circuits,gradient descent learning algorithm,modularity,neural net,scalability,translinear operated circuits,weight perturbation | Gradient descent,Architecture,Computer science,Artificial intelligence,Electronic circuit,Very-large-scale integration,Perturbation (astronomy),Modularity,Machine learning,Vlsi architecture,Scalability | Conference |
Volume | ISSN | ISBN |
---|---|---|
4 | 1098-7576 | 0-7695-0619-4 |
Citations | PageRank | References |
---|---|---|
2 | 0.54 | 4 |
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Francesco Diotalevi | 1 | 17 | 4.17 |
M. Valle | 2 | 97 | 19.19 |
G. M. Bo | 3 | 27 | 6.03 |
D. D. Caviglia | 4 | 22 | 4.64