Title
Modeling Dynamics of Task and Social Cohesion from the Group Perspective Using Nonverbal Motion Capture-based Features
Abstract
Group cohesion is a multidimensional emergent state that manifests during group interaction. It has been extensively studied in several disciplines, such as the social sciences and computer science, and it has been investigated through both verbal and nonverbal communication. This work investigates the dynamics of the task and social dimensions of cohesion through nonverbal motion-capture-based features. We model dynamics as either decreasing or stable/increasing with respect to the previous measurement of cohesion. We design and develop a set of features related to space and body movement from motion capture data, as this modality offers reliable and accurate measurements of body motion. Then, we use a random forest model to perform binary classification (decrease vs. no decrease) of the dynamics of cohesion along the task and social dimensions. Our model adopts labels from self-assessments of group cohesion, providing a perspective different from previous work relying on third-party labelling. The analysis reveals that, in a multilabel setting, our model is able to predict changes in task and social cohesion with an average accuracy of 64% (±3%) and 67% (±3%), respectively, outperforming random guessing (50%). In a multiclass setting comprised of four classes (i.e., decrease/decrease, decrease/no decrease, no decrease/decrease, and no decrease/no decrease), our model also outperforms chance level (25%) for each class (i.e., 54%, 44%, 33%, 50%, respectively). Furthermore, this work provides a method based on notions from cooperative game theory (i.e., SHAP values) to assess features' impact and importance. We identify that the most important features for predicting cohesion dynamics relate to spatial distance, the amount of movement while walking, the overall posture expansion, as well as the amount of interpersonal facing in the group.
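The classification setup described above can be sketched as follows. This is not the authors' code: the feature matrix, feature count, and label construction are synthetic stand-ins for the paper's motion-capture features (e.g., spatial distance, amount of movement, posture expansion), used only to illustrate a random-forest binary classifier of cohesion dynamics (decrease vs. no decrease); the paper's SHAP-based feature-importance analysis (e.g., via the `shap` package) is not reproduced here.

```python
# Illustrative sketch, NOT the paper's implementation: binary classification
# of cohesion dynamics with a random forest on synthetic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 200, 6  # hypothetical numbers, for illustration only

# Synthetic stand-ins for motion-capture-based features
# (e.g., spatial distance, movement while walking, posture expansion).
X = rng.normal(size=(n_samples, n_features))

# Synthetic binary labels (1 = no decrease, 0 = decrease), loosely coupled
# to the first feature so the classifier has signal to learn.
y = (X[:, 0] + 0.5 * rng.normal(size=n_samples) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.2f}")
```

On such synthetic data the cross-validated accuracy comfortably exceeds the 50% chance level, mirroring the kind of comparison against random guessing reported in the abstract.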
Year
2020
DOI
10.1145/3395035.3425963
Venue
Multimodal Interfaces and Machine Learning for Multimodal Interaction
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name               Order  Citations  PageRank
Fabian Walocha     1      0          0.34
Lucien Maman       2      0          0.34
Mohamed Chetouani  3      1          1.38
Giovanna Varni     4      170        26.42