Title
Assessing User Bias in Affect Detection within Context-Based Spoken Dialog Systems
Abstract
This paper presents empirical evidence of user bias in a laboratory-oriented evaluation of a Spoken Dialog System. Specifically, we address bias in users' satisfaction judgements and question the reliability of these data for modeling user emotion, focusing on contentment and frustration in a spoken dialog system. The bias was detected through machine learning experiments conducted on two datasets, one labelled by users and one by annotators, whose results were then compared to assess the reliability of each dataset. The target was the satisfaction rating, and the predictors were conversational/dialog features. Our results indicated that standard classifiers were significantly more successful in discriminating frustration and contentment, and the intensities of these emotions (reflected in satisfaction ratings), from annotator data than from user data. Indirectly, the results also showed that conversational features are reliable predictors of these two emotions.
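The comparison described above can be illustrated with a minimal, hypothetical sketch: train the same standard classifier on conversational/dialog features, once with user-assigned satisfaction ratings as the target and once with annotator-assigned ratings, and compare cross-validated accuracy. The feature names, file names, and choice of classifier below are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of the user-vs-annotator dataset comparison.
# Assumed inputs: one CSV per labelling source, one row per dialog,
# with dialog-level conversational features and a satisfaction rating.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURES = ["turn_count", "mean_turn_duration", "barge_in_rate", "asr_confidence"]  # assumed features

def rating_discrimination_score(csv_path: str) -> float:
    """Cross-validated accuracy of predicting satisfaction ratings from dialog features."""
    data = pd.read_csv(csv_path)
    X = data[FEATURES]
    y = data["satisfaction_rating"]  # user- or annotator-assigned rating (assumed column name)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)  # stand-in "standard classifier"
    return cross_val_score(clf, X, y, cv=10).mean()

user_acc = rating_discrimination_score("user_ratings.csv")            # hypothetical file
annotator_acc = rating_discrimination_score("annotator_ratings.csv")  # hypothetical file
print(f"user-labelled data:      {user_acc:.3f}")
print(f"annotator-labelled data: {annotator_acc:.3f}")
```

Under this setup, the paper's finding corresponds to the annotator-labelled dataset yielding the higher score.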
Year
2012
DOI
10.1109/SocialCom-PASSAT.2012.112
Venue
SocialCom/PASSAT
Keywords
satisfaction judgement, dialog feature, conversational feature, context-based spoken dialog systems, annotator data, user data, user satisfaction rating, user emotion, dialog system, assessing user bias, user bias, affect detection, satisfaction rating, human factors, learning artificial intelligence, psychology, frustration
Field
Dialog box, Spoken dialog systems, Contentment, Spoken dialog, Empirical evidence, Context based, Emotion recognition, Computer science, Natural language processing, Artificial intelligence
DocType
Conference
ISBN
978-1-4673-5638-1
Citations
0
PageRank
0.34
References
0
Authors
8