Abstract | ||
---|---|---|
This study addresses vision-based sign language recognition, i.e. translating signs into English. The authors propose a fully automatic system that begins by segmenting signs into manageable subunits. A variety of spatiotemporal descriptors are extracted to form a feature vector for each subunit. Based on these features, subunits are clustered to yield codebooks. A boosting algorithm is then applied to learn a subset of weak classifiers representing discriminative combinations of features and subunits, and to combine them into a strong classifier for each sign. A joint learning strategy is also adopted to share subunits across sign classes, leading to more efficient classification. Experimental results on real-world hand gesture videos demonstrate that the proposed approach is promising for building an effective and scalable system. |
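The pipeline the abstract describes (cluster subunit descriptors into a codebook, encode each sign as a bag-of-subunits histogram, then boost weak classifiers per sign) can be sketched roughly as below. This is not the authors' implementation: the synthetic data, the simple Lloyd-style k-means, and the plain AdaBoost with single-bin decision stumps are all illustrative stand-ins for the paper's actual descriptors and boosting algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Toy Lloyd's k-means; deterministic init from evenly spaced samples."""
    centers = X[:: max(1, len(X) // k)][:k].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def encode(subunits, codebook):
    """Bag-of-subunits: normalized histogram of nearest-codeword counts."""
    d = ((subunits[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    h = np.bincount(d.argmin(1), minlength=len(codebook)).astype(float)
    return h / h.sum()

class Stump:
    """Weak classifier: sign of a threshold test on one histogram bin."""
    def fit(self, X, y, w):
        best = np.inf
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for s in (1, -1):
                    pred = np.where(X[:, f] >= t, s, -s)
                    err = w[pred != y].sum()
                    if err < best:
                        best, self.f, self.t, self.s = err, f, t, s
        return best
    def predict(self, X):
        return np.where(X[:, self.f] >= self.t, self.s, -self.s)

def adaboost(X, y, rounds=5):
    """Standard discrete AdaBoost over stumps (one-vs-rest per sign)."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        stump = Stump()
        err = max(stump.fit(X, y, w), 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * stump.predict(X))
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * s.predict(X) for a, s in ensemble))

# Hypothetical data: each "sign video" is 10 subunit descriptors drawn
# around a class-specific center (real descriptors would be spatiotemporal).
def make_sign(label, n=10):
    center = np.zeros(2) if label > 0 else np.full(2, 3.0)
    return center + rng.normal(0, 0.5, size=(n, 2))

labels = np.array([1] * 20 + [-1] * 20)
signs = [make_sign(l) for l in labels]
codebook = kmeans(np.vstack(signs), k=4)
X = np.array([encode(s, codebook) for s in signs])
ensemble = adaboost(X, labels)
acc = (predict(ensemble, X) == labels).mean()
```

On this cleanly separable toy data the boosted stumps fit the training set essentially perfectly; the point is only to show how codebook histograms feed the per-sign strong classifier.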
Year | DOI | Venue |
---|---|---|
2013 | 10.1049/iet-ipr.2012.0273 | IET Image Processing |
Keywords | Field | DocType |
---|---|---|
video signal processing,pattern clustering,feature vector,learning (artificial intelligence),spatiotemporal descriptors,English,boosting algorithm,vision-based sign language recognition framework,joint learning strategy,feature extraction,image classification,natural language processing,hearing-impaired people,boosted subunits,handicapped aids,sign language recognition,vectors,weak classifiers,sign translation | Feature vector,Pattern recognition,Gesture,Computer science,Speech recognition,Sign language,Artificial intelligence,Boosting (machine learning),Scalable system,Classifier (linguistics),Discriminative model | Journal
Volume | Issue | ISSN |
---|---|---|
7 | 1 | 1751-9659 |
Citations | PageRank | References |
---|---|---|
1 | 0.35 | 6 |
Authors | ||
---|---|---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Junwei Han | 1 | 3501 | 194.57 |
George Awad | 2 | 362 | 29.64 |
Alistair Sutherland | 3 | 101 | 14.36 |