Title |
---|
On NoMatchs, NoInputs and BargeIns: do non-acoustic features support anger detection? |
Abstract |
---|
Most studies on speech-based emotion recognition rely on prosodic and acoustic features alone and employ artificially acted corpora, so their results cannot be generalized to telephone-based speech applications. In contrast, we present an approach based on utterances from 1,911 calls to a deployed telephone-based speech application, taking advantage of additional dialogue, NLU and ASR features that are incorporated into the emotion recognition process. Depending on the task, non-acoustic features add 2.3% in classification accuracy compared to using only acoustic features. |
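The abstract describes fusing acoustic features with dialogue-level, NLU and ASR features for anger classification. A minimal sketch of that idea, with hypothetical feature names and a toy scoring rule (not the paper's actual classifier or feature set):

```python
# Illustrative sketch: combining acoustic features with non-acoustic
# (dialogue/ASR) features into one vector for anger classification.
# All feature names and weights below are assumptions for illustration.

def fuse_features(acoustic, dialogue):
    """Concatenate acoustic and dialogue-level features into one vector."""
    return [
        acoustic["pitch_mean"],              # mean pitch of the utterance (Hz)
        acoustic["energy_mean"],             # mean energy (dB)
        float(dialogue["no_match_count"]),   # NoMatch events so far in the call
        float(dialogue["barge_in_count"]),   # barge-ins so far in the call
        dialogue["asr_confidence"],          # ASR confidence for this turn (0..1)
    ]

def toy_anger_score(features):
    """Toy linear score: raised pitch/energy, repeated NoMatches and
    barge-ins, and low ASR confidence all push toward 'angry'."""
    pitch, energy, no_match, barge_in, asr_conf = features
    return (0.01 * (pitch - 200.0)       # raised pitch
            + 0.05 * (energy - 60.0)     # raised loudness
            + 0.5 * no_match             # repeated misrecognitions
            + 0.5 * barge_in             # interrupting the system prompt
            + 1.0 * (1.0 - asr_conf))    # poor recognition confidence

def classify(features, threshold=1.5):
    return "angry" if toy_anger_score(features) > threshold else "not angry"
```

A calm caller (180 Hz, 58 dB, no dialogue trouble, ASR confidence 0.95) scores well below the threshold, while a caller with several NoMatches, barge-ins and low ASR confidence scores above it; in practice one would train the weights, e.g. with a standard linear classifier, rather than hand-set them.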
Year | Venue | Keywords |
---|---|---|
2009 | SIGDIAL Conference | non-acoustic feature, additional dialogue feature, asr feature, nlu feature, classification accuracy, telephone-based speech application, emotion recognition process, acoustic feature, non-acoustic features support anger, speech-based emotion recognition |
Field | DocType | Citations |
---|---|---|
Emotion recognition, Computer science, Speech applications, Speech recognition, Anger, Natural language processing, Artificial intelligence | Conference | 9 |

PageRank | References | Authors |
---|---|---|
0.87 | 6 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Alexander Schmitt | 1 | 53 | 5.33 |
Tobias Heinroth | 2 | 56 | 9.56 |
Jackson Liscombe | 3 | 169 | 19.13 |