Title
Prediction of Interlocutors’ Subjective Impressions Based on Functional Head-Movement Features in Group Meetings
Abstract
A novel model is proposed to predict interlocutors' subjective impressions during group meetings from their head movements. The goal is to explore the potential of the communicative functions performed through head movements. To this end, this paper focuses on ten frequent functions, such as speaker emphasis and listener back-channel responses, which are detected from head-pose sequences using convolutional neural networks (CNNs). As the prediction target, this study employs four subjective-impression items—atmosphere, enjoyment, willingness, and concentration—scored at two-minute intervals by the interlocutors themselves in four-party meetings across 17 groups. From the detected head-movement functions, this paper defines new functional features, including function occurrence rates and composition ratios, in addition to kinetic features representing head-movement activity. Using these features as input, random forest regressors predict the impression scores. Compared with a baseline model using only kinetic features, the model using both kinetic and functional features improved prediction performance: the percentage of moderate or higher correlations (≥ 0.5) between reported and predicted scores increased from 25% to 41% across all groups and items. These results suggest that head-movement functions could be effective cues for predicting interlocutors' subjective impressions.
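The abstract describes a pipeline in which per-window feature vectors (function occurrence rates, composition ratios, and kinetic features) are fed to random forest regressors, and predictions are evaluated by their correlation with self-reported impression scores. The following is a minimal illustrative sketch of that kind of pipeline, not the authors' implementation: the feature values, dimensions, score generation, and train/test split below are all hypothetical stand-ins, since the paper's features come from CNN-based detection of head-movement functions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical feature vectors, one per two-minute window:
# 10 function occurrence rates + 10 composition ratios + 2 kinetic features.
n_windows, n_features = 120, 22
X = rng.random((n_windows, n_features))

# Hypothetical impression scores (e.g., "enjoyment"), loosely tied to the
# occurrence-rate features so the regressor has signal to learn from.
y = X[:, :10].sum(axis=1) + 0.1 * rng.standard_normal(n_windows)

# Simple holdout split (hypothetical; the paper does not specify this split).
train, test = slice(0, 90), slice(90, None)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[train], y[train])
pred = model.predict(X[test])

# Pearson correlation between reported and predicted scores, the kind of
# metric the paper uses to judge prediction quality (moderate = >= 0.5).
r = np.corrcoef(y[test], pred)[0, 1]
print(f"correlation: {r:.2f}")
```

With real data, one such correlation would be computed per group and per impression item, and the fraction of correlations at or above 0.5 summarized across all of them.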
Year: 2021
DOI: 10.1145/3462244.3479930
Venue: Multimodal Interfaces and Machine Learning for Multimodal Interaction
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 4
Name              Order  Citations  PageRank
Shumpei Otsuchi   1      0          0.34
Yoko Ishii        2      0          0.34
Momoko Nakatani   3      0          0.34
Kazuhiro Otsuka   4      6195       4.15