Title
Emotion Recognition Using Synthetic Speech as Neutral Reference
Abstract
A common approach to recognizing emotion from speech is to estimate multiple acoustic features at the sentence or turn level. These features are derived independently of the underlying lexical content. Studies have demonstrated that lexically dependent models improve emotion recognition accuracy. However, current practical approaches can only model small lexical units such as phonemes, syllables, or a few keywords, which limits these systems. We believe that building longer lexical models (i.e., sentence-level models) is feasible by leveraging the advances in speech synthesis. Assuming that the transcript of the target speech is available, we synthesize speech conveying the same lexical information. The synthetic speech is used as a neutral reference against which different acoustic features are contrasted, unveiling local emotional changes. This paper introduces this novel framework and provides insights into how to compare the target and synthetic speech signals. Our evaluations demonstrate the benefits of synthetic speech as a neutral reference for incorporating lexical dependencies in emotion recognition. The experimental results show that adding features derived from contrasting expressive speech with the proposed synthetic speech reference increases the accuracy by 2.1% and 2.8% (absolute) in classifying low versus high levels of arousal and valence, respectively.
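To make the contrast idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) assuming Python with librosa and NumPy: a synthetic rendering of the same transcript, produced by any off-the-shelf TTS, serves as the neutral reference; the two signals are aligned with dynamic time warping; and sentence-level statistics of the frame-wise feature differences become the extra features fed to an arousal/valence classifier. MFCCs here merely stand in for whatever acoustic features are contrasted, and the function name and file paths are placeholders.

```python
# Hypothetical sketch of contrast features against a synthetic neutral
# reference; MFCCs and DTW alignment are illustrative assumptions.
import numpy as np
import librosa

def contrast_features(target_wav, synth_wav, sr=16000, n_mfcc=13):
    """Sentence-level statistics of aligned feature differences.

    target_wav: expressive speech; synth_wav: synthetic speech generated
    from the same transcript (the neutral reference).
    """
    y_tgt, _ = librosa.load(target_wav, sr=sr)
    y_syn, _ = librosa.load(synth_wav, sr=sr)

    # Frame-level features for both signals (MFCCs as a stand-in).
    mfcc_tgt = librosa.feature.mfcc(y=y_tgt, sr=sr, n_mfcc=n_mfcc)
    mfcc_syn = librosa.feature.mfcc(y=y_syn, sr=sr, n_mfcc=n_mfcc)

    # The synthetic reference will not match the target's timing, so
    # align the two utterances with dynamic time warping.
    _, warp_path = librosa.sequence.dtw(X=mfcc_tgt, Y=mfcc_syn)

    # Frame-wise differences along the warping path capture local
    # deviations of the expressive speech from the neutral reference.
    diffs = mfcc_tgt[:, warp_path[:, 0]] - mfcc_syn[:, warp_path[:, 1]]

    # Summarize at sentence level; these statistics would be appended to
    # the standard sentence-level acoustic features of a classifier.
    return np.concatenate([diffs.mean(axis=1), diffs.std(axis=1)])
```

In such a setup, the returned vector would simply be concatenated with conventional sentence-level features before training a binary low-versus-high arousal or valence classifier.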
Year
2015
Venue
2015 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Keywords
emotion detection, synthetic speech, speech rate, speech alignment
Field
Speech corpus, Speech processing, Speech synthesis, Audio mining, Voice activity detection, Computer science, Speech recognition, TRACE (psycholinguistics), Natural language processing, Artificial intelligence, Speech segmentation, Acoustic model
DocType
Conference
ISSN
1520-6149
Citations
6
PageRank
0.44
References
20
Authors
2
Name            Order  Citations  PageRank
Reza Lotfian    1      30         2.65
Carlos Busso    2      1616       93.04