Title
Event-Based American Sign Language Recognition Using Dynamic Vision Sensor
Abstract
American Sign Language (ASL) is one of the most effective communication tools for people with hearing difficulties. However, most people do not understand ASL. To bridge this gap, we propose EV-ASL, an automatic ASL interpretation system based on a dynamic vision sensor (DVS). Compared to traditional RGB-based approaches, a DVS consumes significantly fewer resources (energy, computation, bandwidth), and, owing to its event-based nature, it captures only moving objects without the need for background subtraction. Finally, its wide dynamic response range enables EV-ASL to work under a variety of lighting conditions. EV-ASL introduces a novel representation of event streams and employs a deep convolutional neural network for sign recognition. To evaluate the performance of EV-ASL, we recruited 10 participants and collected 11,200 samples of 56 different ASL words. The evaluation shows that EV-ASL achieves a recognition accuracy of 93.25%.
Year
2021
DOI
10.1007/978-3-030-86137-7_1
Venue
WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2021, PT III
Keywords
American Sign Language, Convolutional neural networks, Dynamic vision sensor
DocType
Conference
Volume
12939
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
6
Name            Order  Citations  PageRank
Yong Wang       1      0          0.68
Xian Zhang      2      1          0.69
Yanxiang Wang   3      4          1.39
HongBin Wang    4      0          1.01
Chanying Huang  5      0          1.35
Yiran Shen      6      124        15.80