Title
Multi-modal social signal analysis for predicting agreement in conversation settings
Abstract
In this paper, we present a non-invasive ambient intelligence framework for the analysis of non-verbal communication in conversational settings. In particular, we apply feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues drawn from the fields of psychology and observational methodology. We test our methodology on data captured in victim-offender mediation scenarios. Using different state-of-the-art classification approaches, our system achieves over 75% recognition accuracy in predicting agreement among the parties involved in the conversations, using expert opinions as ground truth.
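The abstract describes a pipeline of multi-modal feature extraction followed by supervised classification against expert-provided agreement labels. The sketch below only illustrates that kind of pipeline and is not the authors' implementation: the synthetic data, the early feature-level fusion, and the random-forest classifier are assumptions standing in for the behavioral indicators and state-of-the-art classifiers the paper actually evaluates.

# Hypothetical sketch (not the authors' code): fuse per-conversation behavioral
# indicators from audio and RGB-depth streams, then predict expert-labelled
# agreement with an off-the-shelf classifier, reporting cross-validated accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: one row per conversation. In the paper these would be
# behavioral indicators computed from the audio and RGB-depth recordings;
# here they are random stand-ins.
n_conversations, n_audio_feats, n_visual_feats = 60, 10, 14
audio_feats = rng.normal(size=(n_conversations, n_audio_feats))
visual_feats = rng.normal(size=(n_conversations, n_visual_feats))
X = np.hstack([audio_feats, visual_feats])    # early (feature-level) fusion
y = rng.integers(0, 2, size=n_conversations)  # expert label: agreement / no agreement

# Any standard classifier could stand in for the "state-of-the-art classification
# approaches" mentioned in the abstract; a random forest is used purely for illustration.
clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")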
Year
2013
DOI
10.1145/2522848.2532594
Venue
ICMI
Keywords
feature extraction technique, communicative cue, audio-rgb-depth data, non-verbal communication, ground truth, experts opinion, different state-of-the-art classification approach, multi-modal social signal analysis, observational methodology, non-invasive ambient intelligence framework, behavioral indicator, conversation setting, computer vision, machine learning, pattern recognition
Field
Signal processing, Computer vision, Observational study, Conversation, Computer science, Ambient intelligence, Feature extraction, Ground truth, Artificial intelligence, Mediation (Marxist theory and media studies), Machine learning, Modal
DocType
Conference
Citations
5
PageRank
0.41
References
15
Authors
3
Name
Order
Citations
PageRank
Víctor Ponce-López11327.10
Sergio Escalera21415113.31
Xavier Baró347433.99