| Abstract |
|---|
| This paper presents a system for automatically translating gestures of the manual alphabet in Arabic Sign Language. The proposed Arabic Sign Language Alphabets Translator (ArSLAT) system does not rely on gloves or visual markings to accomplish the recognition task. Instead, it operates on images of bare hands, which allows the user to interact with the system in a natural way. The proposed system consists of five main phases: pre-processing, best-frame detection, category detection, feature extraction, and classification. The extracted features are invariant to translation, scale, and rotation, which makes the system more flexible. Experiments showed that the proposed ArSLAT system recognized the 30 letters of the Arabic alphabet with an accuracy of 91.3%. |
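The abstract states that the extracted features are translation-, scale-, and rotation-invariant but does not name them. Hu's moment invariants are one classic family of features with exactly these properties; the sketch below is a hypothetical illustration (not the paper's actual feature set) computing the first Hu invariant, phi1 = eta20 + eta02, of a binary hand silhouette.

```python
import numpy as np

def hu_phi1(img):
    """First Hu moment invariant: phi1 = eta20 + eta02.

    Central moments give translation invariance, normalizing by
    m00**2 gives scale invariance, and the eta20 + eta02 sum is
    additionally rotation invariant.
    """
    img = np.asarray(img, dtype=float)
    ys, xs = np.indices(img.shape)
    m00 = img.sum()
    cx = (xs * img).sum() / m00          # centroid x
    cy = (ys * img).sum() / m00          # centroid y
    mu20 = ((xs - cx) ** 2 * img).sum()  # central (translation-invariant) moments
    mu02 = ((ys - cy) ** 2 * img).sum()
    eta20 = mu20 / m00 ** 2              # normalized (scale-invariant) moments
    eta02 = mu02 / m00 ** 2
    return eta20 + eta02

# Toy binary "silhouette": the invariant is unchanged by shifting the
# blob and nearly unchanged by uniform rescaling (small discretization error).
blob = np.zeros((20, 20))
blob[5:12, 4:14] = 1
shifted = np.roll(np.roll(blob, 3, axis=0), 2, axis=1)
scaled = np.kron(blob, np.ones((2, 2)))  # 2x nearest-neighbour upscaling
print(hu_phi1(blob), hu_phi1(shifted), hu_phi1(scaled))
```

Such invariants let a classifier ignore where the hand sits in the frame, how close it is to the camera, and its in-plane orientation, which is the flexibility the abstract refers to.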
| Year | DOI | Venue |
|---|---|---|
| 2010 | 10.1109/CISIM.2010.5643519 | Computer Information Systems and Industrial Management Applications |
| Field | DocType | ISBN |
|---|---|---|
| Object detection, Language translation, Feature detection (computer vision), Gesture, Computer science, Feature (computer vision), Gesture recognition, Feature extraction, Speech recognition, Artificial intelligence, Natural language processing, Contextual image classification | Conference | 978-1-4244-7817-0 |
| Citations | PageRank | References |
|---|---|---|
| 14 | 1.22 | 12 |
| Authors |
|---|
| 5 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Nashwa El-Bendary | 1 | 119 | 24.37 |
| Hossam M. Zawbaa | 2 | 27 | 3.31 |
| Mahmoud S. Daoud | 3 | 14 | 1.22 |
| A. Ella Hassanien | 4 | 36 | 4.57 |
| Kazumi Nakamatsu | 5 | 168 | 35.95 |