Title
Quantization-aware training for low precision photonic neural networks
Abstract
Recent advances in Deep Learning (DL) have fueled interest in developing neuromorphic hardware accelerators that can improve computational speed and energy efficiency beyond existing accelerators. Among the most promising research directions are photonic neuromorphic architectures, which can achieve femtojoule-per-MAC efficiencies. Despite the benefits that arise from the use of neuromorphic architectures, a significant bottleneck is the need for expensive high-speed, high-precision analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) to transfer the electrical signals arising from the various Artificial Neural Network (ANN) operations (inputs, weights, etc.) to the photonic optical engines. The main contribution of this paper is to study the quantization phenomena induced in photonic models by DACs/ADCs as an additional noise/uncertainty source, and to provide a photonics-compliant framework for training photonic DL models with limited precision, reducing the need for expensive high-precision DACs/ADCs. The effectiveness of the proposed method is demonstrated on different architectures, ranging from fully connected and convolutional networks to recurrent architectures, following recent advances in photonic DL.
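The core idea summarized in the abstract, treating DAC/ADC rounding as a noise source that the network already experiences during training, can be sketched as follows. This is a generic illustration of quantization-aware training with a uniform fake quantizer, not the paper's actual framework; the function names, the 4-bit default, and the symmetric [-1, 1] converter range are assumptions for the sketch.

```python
import numpy as np

def fake_quantize(x, n_bits=4, x_min=-1.0, x_max=1.0):
    """Simulate a low-precision DAC/ADC: clip the signal to the
    converter's dynamic range and round it to one of 2**n_bits
    uniformly spaced levels."""
    levels = 2 ** n_bits - 1
    scale = (x_max - x_min) / levels
    x_clipped = np.clip(x, x_min, x_max)
    k = np.round((x_clipped - x_min) / scale)  # integer level index
    return k * scale + x_min

def qat_linear_forward(x, w, n_bits=4):
    """Quantization-aware forward pass for one linear layer: both
    activations and weights pass through the fake quantizer, so the
    loss is computed under the same rounding noise the low-precision
    hardware would introduce."""
    return fake_quantize(x, n_bits) @ fake_quantize(w, n_bits)
```

During backpropagation such rounding steps are typically bypassed with a straight-through estimator (gradients flow as if the quantizer were the identity), so the full-precision shadow weights can still be updated.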
Year: 2022
DOI: 10.1016/j.neunet.2022.09.015
Venue: Neural Networks
Keywords: Photonic deep learning, Neural network quantization, Constrained-aware training
DocType: Journal
Volume: 155
ISSN: 0893-6080
Citations: 0
PageRank: 0.34
References: 0
Authors: 7
Name                    Order  Citations  PageRank
M. Kirtas               1      0          1.01
A. Oikonomou            2      0          0.34
N. Passalis             3      117        33.70
G. Mourgias-Alexandris  4      0          0.34
M. Moralis-Pegios       5      0          0.34
Nikos Pleros            6      25         23.69
Anastasios Tefas        7      2055       177.05