Title
VI-Net: View-Invariant Quality of Human Movement Assessment
Abstract
We propose a view-invariant method for assessing the quality of human movements that does not rely on skeleton data. Our end-to-end convolutional neural network consists of two stages: first, a view-invariant trajectory descriptor is generated for each body joint from RGB images; then, the collection of trajectories for all joints is processed by an adapted, pre-trained 2D convolutional neural network (CNN) (e.g., VGG-19 or ResNeXt-50) to learn the relationships amongst the different body parts and deliver a score for the movement quality. We release the only publicly available, multi-view, non-skeleton, non-mocap rehabilitation movement dataset (QMAR), and provide results for both cross-subject and cross-view scenarios on this dataset. We show that VI-Net achieves an average rank correlation of 0.66 on cross-subject and 0.65 on unseen views when trained on only two views. We also evaluate the proposed method on the single-view rehabilitation dataset KIMORE and obtain 0.66 rank correlation against a baseline of 0.62.
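To make the two-stage architecture described in the abstract concrete, below is a minimal PyTorch sketch: per-joint responses over time are collapsed into fixed-size trajectory descriptors, stacked into an image-like map, and scored by an adapted pre-trained 2D CNN backbone (ResNeXt-50 here). All module names, tensor shapes, and the simple pooling used for stage 1 are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the two-stage idea (not the authors' code).
import torch
import torch.nn as nn
import torchvision.models as models


class TrajectoryDescriptor(nn.Module):
    """Stage 1 (heavily simplified, assumed form): collapse each joint's
    spatio-temporal responses into a fixed-length trajectory descriptor."""

    def __init__(self, descriptor_len=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d((1, descriptor_len))

    def forward(self, joint_maps):          # (B, J, T, D) per-joint responses over time
        return self.pool(joint_maps)        # (B, J, 1, descriptor_len)


class QualityScorer(nn.Module):
    """Stage 2 (assumed form): treat the J x descriptor_len collection of
    trajectories as a single-channel image and regress one quality score
    with an adapted 2D CNN backbone such as ResNeXt-50."""

    def __init__(self, descriptor_len=64):
        super().__init__()
        self.stage1 = TrajectoryDescriptor(descriptor_len)
        # The paper adapts a pre-trained backbone; ImageNet weights would be
        # loaded here in practice (weights=None keeps the sketch self-contained).
        backbone = models.resnext50_32x4d(weights=None)
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # single movement-quality score
        self.backbone = backbone

    def forward(self, joint_maps):                   # (B, J, T, D)
        traj = self.stage1(joint_maps)               # (B, J, 1, descriptor_len)
        traj_image = traj.squeeze(2).unsqueeze(1)    # (B, 1, J, descriptor_len)
        return self.backbone(traj_image)             # (B, 1)


if __name__ == "__main__":
    x = torch.randn(2, 18, 100, 64)   # 2 clips, 18 joints, 100 frames, 64-dim responses
    print(QualityScorer()(x).shape)   # torch.Size([2, 1])
```

The design choice the sketch illustrates is that, once each joint is reduced to a view-invariant trajectory row, the full set of joints forms a 2D map that an off-the-shelf image backbone can consume, which is what lets the method reuse pre-trained 2D CNNs rather than 3D or skeleton-specific models.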
Year
2020
DOI
10.3390/s20185258
Venue
SENSORS
Keywords
movement analysis, view-invariant convolutional neural network (CNN), health monitoring
DocType
Journal
Volume
20
Issue
18
ISSN
1424-8220
Citations
0
PageRank
0.34
References
0
Authors
4
Name                Order  Citations  PageRank
Faegheh Sardari     1      0          0.68
Adeline Paiement    2      61         7.88
Sion L. Hannuna     3      69         10.37
Majid Mirmehdi      4      955        96.94