Title
DS-P3SNet: An Efficient Classification Approach for Devanagari Script-Based P300 Speller Using Compact Channelwise Convolution and Knowledge Distillation
Abstract
Deep convolutional neural network (CNN)-based and ensemble-based methods for classifying P300 in the Devanagari script (DS)-based P300 speller (DS-P3S) generate a large number of trainable parameters, which increases computational complexity and the likelihood of overfitting. Recent attempts to overcome these problems further degrade accuracy due to dense connectivity and channel-mix group convolution, and compressing the deep models in these attempts has also been found to lose vital information. Therefore, to mitigate these problems, an efficient compact classification model called "DS-P3SNet," combined with knowledge distillation (KD) and transfer learning (TL), is proposed in this article. It includes: 1) extraction of rich morphological information across the temporal region; 2) a combination of channelwise and channel-mix depthwise convolution (C2-DwCN) for efficient channel selection and extraction of spatial information with fewer trainable parameters; 3) channelwise convolution (Cw-CN) for classification to provide sparse connectivity; 4) knowledge distillation to reduce the tradeoff between accuracy and the number of trainable parameters; 5) subject-to-subject transfer learning to reduce subject variability; and 6) trial-to-trial transfer learning to reduce the tradeoff between the number of trials and accuracy. The experiments were performed on a self-generated dataset of 20 words comprising 79 DS characters collected from ten healthy volunteer subjects. Average accuracies of 95.32 ± 0.85% and 94.64 ± 0.68% were obtained for the subject-dependent and subject-independent experiments, respectively. The number of trainable parameters was also reduced by approximately 2–34 times compared with existing models, with improved or equivalent performance.
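The abstract names two core ingredients: channelwise (depthwise) convolution for sparse connectivity and knowledge distillation to shrink the model without sacrificing accuracy. Below is a minimal, hypothetical PyTorch sketch of both ideas; it is not the authors' DS-P3SNet implementation, and all sizes (8 EEG channels, 256 time samples, temperature T = 4, blend weight alpha = 0.5) are illustrative assumptions.

# Hypothetical illustration only -- not the authors' DS-P3SNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelwiseConv(nn.Module):
    # Depthwise (groups = in_channels) temporal convolution: each EEG channel
    # is filtered independently, so no cross-channel weights are trained,
    # which is what gives the sparse connectivity described in the abstract.
    def __init__(self, channels=8, kernel=15):
        super().__init__()
        self.dw = nn.Conv1d(channels, channels, kernel,
                            padding=kernel // 2, groups=channels)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, x):  # x: (batch, channels, time)
        return F.elu(self.bn(self.dw(x)))

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Hinton-style distillation: soften teacher and student logits with a
    # temperature T, then blend the KL term with ordinary cross-entropy.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

x = torch.randn(4, 8, 256)            # 4 EEG epochs, 8 channels, 256 samples
print(ChannelwiseConv()(x).shape)     # torch.Size([4, 8, 256])

With groups equal to the channel count, the layer trains channels × kernel weights rather than channels² × kernel, which is where the parameter saving over a standard convolution comes from.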
Year
2022
DOI
10.1109/TSMC.2022.3156861
Venue
IEEE Transactions on Systems, Man, and Cybernetics: Systems
Keywords
Compact convolution neural network (CNN), Devanagari script (DS), knowledge distillation (KD), P300 speller, transfer learning (TL)
DocType
Journal
Volume
52
Issue
12
ISSN
2168-2216
Citations
0
PageRank
0.34
References
21
Authors
2
Name                       Order  Citations  PageRank
Ghanahshyam B. Kshirsagar  1      6          1.42
Narendra D. Londhe         2      98         13.85