Abstract
---
Vision-based action recognition faces several practical challenges, including recognizing the subject from any viewpoint, processing data in real time, and preserving privacy in real-world settings. Even recognizing profile-based human actions, a subset of vision-based action recognition, is a considerable challenge in computer vision; it forms the basis for understanding complex actions, activities, and behaviors, especially in healthcare applications and video surveillance systems. Accordingly, we introduce a novel method to construct a layer feature model for a profile-based solution that fuses features from multiview depth images. This model enables recognition from several viewpoints with low complexity at a real-time running speed of 63 fps for four profile-based actions: standing/walking, sitting, stooping, and lying. The experiment on the Northwestern-UCLA 3D dataset yielded an average precision of 86.40%. On the i3DPost dataset, the method achieved an average precision of 93.00%. On the PSU multiview profile-based action dataset, a new multiview dataset of profile-based action RGBD images built by our group, it achieved an average precision of 99.31%.
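
The abstract does not specify the layer feature model itself, so the snippet below is only a minimal sketch of the general pipeline shape it describes: per-view features extracted from depth images, fused across views, then classified into the four profile-based actions. The view count, histogram feature, function names, and k-nearest-neighbour classifier are all illustrative assumptions, not the authors' method.

```python
# Minimal sketch of multiview depth-feature fusion (NOT the paper's model).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

ACTIONS = ["standing/walking", "sitting", "stooping", "lying"]
N_VIEWS = 3   # number of depth views; an assumption for illustration
N_BINS = 16   # histogram bins per view; an assumption for illustration

def extract_view_features(depth_image, n_bins=N_BINS):
    """Hypothetical per-view feature: normalized histogram of the
    nonzero (foreground) depth values, assumed scaled to [0, 1]."""
    foreground = depth_image[depth_image > 0]
    if foreground.size == 0:
        return np.zeros(n_bins)
    hist, _ = np.histogram(foreground, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

def fuse_views(depth_images):
    """Late fusion: concatenate the feature vectors of all views."""
    return np.concatenate([extract_view_features(d) for d in depth_images])

# Synthetic demo data standing in for labeled multiview depth frames.
rng = np.random.default_rng(0)
X, y = [], []
for label in range(len(ACTIONS)):
    for _ in range(20):
        views = [rng.random((64, 48)) * (0.2 + 0.2 * label)
                 for _ in range(N_VIEWS)]
        X.append(fuse_views(views))
        y.append(label)

clf = KNeighborsClassifier(n_neighbors=3).fit(np.asarray(X), np.asarray(y))
query = [rng.random((64, 48)) * 0.4 for _ in range(N_VIEWS)]
print(ACTIONS[int(clf.predict([fuse_views(query)])[0])])
```

Concatenation is the simplest possible fusion strategy; the paper's layer feature model is presumably more structured, but the control flow (per-view features, fusion, classifier) follows the same shape.
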
Year | DOI | Venue |
---|---|---|
2018 | 10.1155/2018/9032945 | COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE |

Field | DocType | Volume
---|---|---
Pattern recognition, Computer science, Viewpoints, Lying, Action recognition, Feature model, Artificial intelligence | Journal | 2018

ISSN | Citations | PageRank
---|---|---
1687-5265 | 0 | 0.34

References | Authors
---|---
34 | 2

Name | Order | Citations | PageRank |
---|---|---|---|
Pongsagorn Chalearnnetkul | 1 | 0 | 0.34 |
Nikom Suvonvorn | 2 | 19 | 3.50 |