Title
Training Strategies for Convolutional Neural Networks with Transformed Input
Abstract
Convolutional Neural Networks (CNNs) are now considered the main tool for image classification. However, most networks studied for classification are large, with extensive computing and storage requirements, and their training times are usually very long. Such costly computational and storage requirements cannot be met in many applications running on simple devices such as small processors or Internet of Things (IoT) devices. Therefore, reducing network and input sizes becomes necessary. However, such reductions are not easy and may degrade classification performance. We examine how domain transforms, under different training strategies, can be used for efficient size reduction and improvement of classification accuracy. In this paper, we consider networks with under 220K learnable parameters, as opposed to the millions in deeper networks. We show that by representing the input to a CNN using appropriately selected domain transforms, such as the discrete wavelet transform (DWT) or the discrete cosine transform (DCT), it is possible to efficiently improve the performance of size-reduced networks. For example, the DWT proves very effective when significant size reduction is needed (improving the result by up to 9%). We also show that by tuning training strategies such as the number of epochs and the mini-batch size, performance can be further improved by up to 4% under a fixed training time.
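Since the abstract describes feeding domain-transformed images to a size-reduced CNN, the Python sketch below shows one plausible way to build DWT- and DCT-based input representations. The paper does not publish code, so the wavelet choice ("haar"), the single decomposition level, the 32x32 image size, and the function names here are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only (assumptions noted above).
# Requires NumPy, PyWavelets (pywt), and SciPy.
import numpy as np
import pywt
from scipy.fft import dctn

def dwt_input(image, wavelet="haar"):
    """One-level 2-D DWT of a grayscale image, subbands stacked as channels.

    A 32x32 input becomes a 16x16x4 tensor (LL, LH, HL, HH), so the CNN
    sees half the spatial resolution -- one way to realize the input size
    reduction the abstract refers to -- while the high-frequency detail is
    kept in the extra channels.
    """
    ll, (lh, hl, hh) = pywt.dwt2(image, wavelet)
    return np.stack([ll, lh, hl, hh], axis=-1)

def dct_input(image):
    """2-D DCT (type-II, orthonormal) of the image; same shape as the input,
    but with most energy compacted into the low-frequency coefficients."""
    return dctn(image, norm="ortho")

x = np.random.rand(32, 32).astype(np.float32)  # stand-in for one image
print(dwt_input(x).shape)  # (16, 16, 4): halved spatial size, 4 channels
print(dct_input(x).shape)  # (32, 32): frequency-domain representation
```

Either tensor can then be passed to a small CNN in place of the raw pixels; the DWT variant additionally shrinks the spatial input, which is the case the abstract reports as most effective under heavy size reduction.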
Year
2021
DOI
10.1109/MWSCAS47672.2021.9531913
Venue
2021 IEEE INTERNATIONAL MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS (MWSCAS)
Keywords
image classification, Convolutional neural networks, DCT, DWT, domain transforms
DocType
Conference
ISSN
1548-3746
Citations
0
PageRank
0.34
References
0
Authors
2
Name                          Order  Citations  PageRank
Masoumeh Kalantari Khandani   1      1          1.40
Wasfy B. Mikhael              2      767        6.27