Title
Fusion Networks for Air-Writing Recognition.
Abstract
This paper presents a fusion framework for air-writing recognition. By modeling a hand trajectory with both spatial and temporal features, the proposed network can learn more information than state-of-the-art techniques. The proposed network combines elements of CNN and BLSTM networks to learn isolated air-writing characters. Its performance was evaluated on the alphabet and numeric databases of the public 6DMG dataset. We first evaluate the accuracy of the fusion network using CNN, BLSTM, and another fusion network as references. The results confirm that the average accuracy of the fusion network exceeds that of all references. With 40 BLSTM units, the proposed network achieves its best accuracy of 99.27% on alphabet gestures and 99.33% on numeric gestures. Compared with another work, the accuracy of the proposed network improves by 0.70% and 0.34% on alphabet and numeric gestures, respectively. We also examine the performance of the proposed network while varying the number of BLSTM units. The experiments demonstrate that accuracy improves as the number of BLSTM units increases; however, beyond 20 units the accuracy plateaus even as more units are added, and adding more learning features yields only an insignificant improvement.
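The abstract does not give implementation details, but the following is a minimal sketch of how a CNN-BLSTM fusion classifier for isolated air-writing trajectories might be structured, assuming fixed-length trajectories with 6 motion features per time step (a guess at the 6DMG input), 26 alphabet classes, and 40 BLSTM units. The layer sizes and the fusion-by-concatenation scheme are illustrative assumptions, not the authors' exact architecture.

# Sketch of a CNN-BLSTM fusion classifier for isolated air-writing
# characters; input shape (batch, time_steps, features) and all layer
# sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, in_features=6, num_classes=26, lstm_units=40):
        super().__init__()
        # CNN branch: 1-D convolutions over time capture local spatial patterns.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis -> (batch, 64, 1)
        )
        # BLSTM branch: bidirectional LSTM models the temporal dynamics.
        self.blstm = nn.LSTM(in_features, lstm_units,
                             batch_first=True, bidirectional=True)
        # Fusion: concatenate both feature vectors and classify.
        self.classifier = nn.Linear(64 + 2 * lstm_units, num_classes)

    def forward(self, x):                                     # x: (batch, time, features)
        cnn_feat = self.cnn(x.transpose(1, 2)).squeeze(-1)    # (batch, 64)
        _, (h_n, _) = self.blstm(x)                           # h_n: (2, batch, units)
        blstm_feat = torch.cat([h_n[0], h_n[1]], dim=1)       # (batch, 2 * units)
        return self.classifier(torch.cat([cnn_feat, blstm_feat], dim=1))

# Example: a batch of 8 trajectories, 120 time steps, 6 motion features each.
logits = FusionNet()(torch.randn(8, 120, 6))
print(logits.shape)  # torch.Size([8, 26])

Concatenating a time-pooled convolutional feature vector with the final bidirectional LSTM states is one common way to fuse spatial and temporal representations before a shared classifier.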
Year
2018
DOI
10.1007/978-3-319-73600-6_13
Venue
Lecture Notes in Computer Science
Keywords
Air-writing recognition, Human machine interface, Gesture recognition, Convolutional neural network, BLSTM
Field
Pattern recognition, Computer science, Convolutional neural network, Gesture, Gesture recognition, Fusion, Human–machine interface, Artificial intelligence, Trajectory, Alphabet
DocType
Conference
Volume
10705
ISSN
0302-9743
Citations
2
PageRank
0.38
References
7
Authors
2
Name | Order | Citations | PageRank
Buntueng Yana | 1 | 2 | 0.72
T. Onoye | 2 | 37 | 10.36