Title
Deep Neural Network Hardware Implementation Based on Stacked Sparse Autoencoder.
Abstract
Deep learning techniques have gained prominence in the research community in recent years; however, deep learning algorithms have a high computational cost, which makes them difficult to deploy in many commercial applications. Alternatives have therefore been studied, and methodologies for accelerating complex algorithms, including those based on reconfigurable hardware, have shown significant results. The objective of this paper is to propose a neural network hardware implementation for deep learning applications. The implementation was developed on a field-programmable gate array (FPGA) and supports deep neural networks (DNNs) trained with the stacked sparse autoencoder (SSAE) technique. To allow DNNs with several inputs and layers on the FPGA, the systolic array technique was used throughout the architecture. Details of the designed implementation are presented, along with the hardware area occupation and the processing time for two different implementations. The results show that both implementations achieve high throughput, enabling deep learning techniques to be applied to problems with large amounts of data.
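The abstract combines two techniques: inference through stacked (pre-trained) sparse-autoencoder layers, and a systolic array of multiply-accumulate processing elements to compute each layer's matrix-vector product on the FPGA. The following pure-Python sketch illustrates that combination under simplifying assumptions; it is a behavioral model only, not the authors' HDL design, and the function names (`systolic_matvec`, `ssae_forward`) are hypothetical.

```python
import math

def systolic_matvec(W, x, b):
    """Compute y = W.x + b in a systolic style: one MAC processing
    element (PE) per output neuron, each holding an accumulator register.
    At every "clock" step one input element arrives and all PEs update
    their accumulators in parallel (simplified behavioral model)."""
    n_out = len(W)
    acc = [0.0] * n_out                 # one accumulator per PE
    for step, xj in enumerate(x):       # one input element per clock step
        for i in range(n_out):          # in hardware, all PEs act at once
            acc[i] += W[i][step] * xj
    return [a + bi for a, bi in zip(acc, b)]

def sigmoid(v):
    """Elementwise logistic activation."""
    return [1.0 / (1.0 + math.exp(-z)) for z in v]

def ssae_forward(layers, x):
    """Forward pass through stacked encoder layers; as in the paper,
    SSAE training is assumed to happen offline, so only inference
    runs on the (simulated) hardware."""
    for W, b in layers:                 # layers: list of (weights, biases)
        x = sigmoid(systolic_matvec(W, x, b))
    return x
```

A real systolic array would also pipeline data between neighboring PEs; this model keeps only the one-operand-per-cycle accumulation pattern that motivates the architecture.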
Year: 2019
DOI: 10.1109/ACCESS.2019.2907261
Venue: IEEE ACCESS
Keywords: Deep learning, stacked sparse autoencoder, FPGA, systolic array
Field: Computer architecture, Autoencoder, Computer science, Systolic array, Field-programmable gate array, Gate array, Artificial intelligence, Throughput, Deep learning, Artificial neural network, Distributed computing, Reconfigurable computing
DocType: Journal
Volume: 7
ISSN: 2169-3536
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name                  Order  Citations  PageRank
Maria G. F. Coutinho  1      0          0.34
Matheus F. Torquato   2      2          2.74
M. A.C. Fernandes     3      15         8.23