Abstract
---
Floating-gate SONOS (silicon-oxide-nitride-oxide-silicon) transistors can be used to train neural networks to ideal accuracies that match those of floating-point digital weights on the MNIST dataset when multiple devices represent each weight, or to within 1% of ideal accuracy when a single device is used per weight. This is enabled by operating the devices in the subthreshold regime, where they exhibit symmetric write nonlinearities. A neural training accelerator core based on SONOS with a single device per weight would increase energy efficiency by 120X, operate 2.1X faster, and require 5X lower area than an optimized SRAM-based ASIC.
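The abstract's key point — that subthreshold operation gives symmetric write nonlinearities — can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's device model: it assumes a roughly constant threshold-voltage shift per pulse, which through the exponential subthreshold I-V makes the conductance change proportional to the stored conductance in both directions, so alternating up/down pulse pairs nearly cancel. A device with the more common asymmetric nonlinearity instead drifts toward a fixed balance point regardless of the stored weight.

```python
def pulse(g, direction, alpha=0.02, g_max=1.0, symmetric=True):
    """Apply one write pulse to a normalized conductance g in [0, g_max].

    Toy model (illustrative assumption, not the paper's device physics):
    - symmetric: subthreshold-style update, |step| = alpha * g for both
      potentiation (direction=+1) and depression (direction=-1).
    - asymmetric: potentiation saturates toward g_max while depression is
      proportional to g, the memristor-style nonlinearity that degrades
      gradient-descent training.
    """
    if symmetric:
        step = alpha * g
    elif direction > 0:
        step = alpha * (g_max - g)
    else:
        step = alpha * g
    return max(0.0, min(g_max, g + direction * step))

# Hit both device models with 200 alternating +/- pulse pairs,
# i.e. a pulse train whose net intended update is zero.
g_sym = g_asym = 0.2
for _ in range(200):
    for d in (+1, -1):
        g_sym = pulse(g_sym, d, symmetric=True)
        g_asym = pulse(g_asym, d, symmetric=False)

# The symmetric device stays near its stored value, while the asymmetric
# device drifts toward its balance point near g_max / 2.
print(round(g_sym, 3), round(g_asym, 3))
```

Under this toy model the symmetric device ends within about 8% of its initial value after 400 pulses, while the asymmetric device forgets its stored weight entirely — the mechanism behind the accuracy gap the abstract describes.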
Field | Value
---|---
Year | 2019
DOI | 10.1109/jxcdc.2019.2902409
Venue | IEEE Journal on Exploratory Solid-State Computational Devices and Circuits
DocType | Journal
Volume | abs/1901.10570
Citations | 0
PageRank | 0.34
References | 0
Authors | 9
Name | Order | Citations | PageRank |
---|---|---|---|
Sapan Agarwal | 1 | 13 | 4.07 |
Diana Garland | 2 | 0 | 0.34 |
John Niroula | 3 | 0 | 0.34 |
Robin Jacobs-Gedrim | 4 | 6 | 1.49 |
Alexander H. Hsia | 5 | 0 | 0.34 |
Michael S. van Heukelom | 6 | 0 | 0.34 |
Elliot Fuller | 7 | 0 | 0.34 |
Bruce Draper | 8 | 29 | 5.12 |
Matthew J. Marinella | 9 | 25 | 7.43 |