Title |
---|
On The Use Of Multimodal Cues For The Prediction Of Degrees Of Involvement In Spontaneous Conversation |
Abstract |
---|
Quantifying the degree of involvement of a group of participants in a conversation is a task which humans accomplish every day, but one that machines are, as of yet, unable to perform. In this study we first investigate the correlation between visual cues (gaze and blinking rate) and involvement. We then test the suitability of prosodic cues (acoustic model) as well as gaze and blinking (visual model) for the prediction of the degree of involvement by using a support vector machine (SVM). We also test whether the fusion of the acoustic and the visual model improves the prediction. We show that we are able to predict three classes of involvement with a reduction in error rate of 0.30 (accuracy = 0.68). |
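The fusion approach described in the abstract can be illustrated with a minimal sketch. All feature names, dimensions, and data below are invented for illustration and are not taken from the paper; the sketch only shows the general pattern of early fusion (concatenating acoustic and visual feature vectors) for three-class involvement prediction with an SVM.

```python
# Illustrative sketch only: feature choices and data are synthetic assumptions,
# not the paper's actual features or corpus.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 300
# Hypothetical per-window features for each modality.
acoustic = rng.normal(size=(n, 4))  # e.g. pitch/energy statistics (assumed)
visual = rng.normal(size=(n, 2))    # e.g. gaze proportion, blink rate (assumed)

# Synthetic 3-class involvement labels, loosely driven by the features so
# that there is a learnable signal.
score = acoustic[:, 0] + visual[:, 1]
y = np.digitize(score, [-0.7, 0.7])  # classes 0 (low), 1 (mid), 2 (high)

# Early fusion: concatenate both modalities into one feature vector.
X = np.hstack([acoustic, visual])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"fused-model accuracy: {acc:.2f}")
```

Comparing this fused classifier against single-modality baselines (training on `acoustic` or `visual` alone) mirrors the paper's question of whether fusion improves prediction.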
Year | Venue | Keywords |
---|---|---|
2011 | 12TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2011 (INTERSPEECH 2011), VOLS 1-5 | involvement, multimodality, spontaneous speech, blinking, gaze |
Field | DocType | Citations
---|---|---|
Sensory cue, Conversation, Gaze, Pattern recognition, Computer science, Word error rate, Support vector machine, Speech recognition, Correlation, Artificial intelligence, Acoustic model | Conference | 4
PageRank | References | Authors
---|---|---|
0.51 | 5 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Catharine Oertel | 1 | 77 | 9.46 |
Stefan Scherer | 2 | 1159 | 73.43 |
Nick Campbell | 3 | 46 | 4.72 |