Title
Weighted Feature Fusion Based Emotional Recognition for Variable-Length Speech Using DNN
Abstract
Emotion recognition plays an increasingly important role in human-computer interaction systems and is a key technology in multimedia communication. Because neural networks can automatically learn intermediate representations of the raw speech signal, most current methods use a Convolutional Neural Network (CNN) to extract information directly from spectrograms, but this can leave the information contained in hand-crafted features underused. In this work, a model based on a weighted feature fusion method is proposed for emotion recognition of variable-length speech. Since chroma-based features are closely related to speech emotion, the model combines CNN-based features with chroma-based features and thereby exploits the useful information in the chromagram to improve performance. We evaluated the model on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset and achieved an improvement of more than 5% in both weighted accuracy (WA) and unweighted accuracy (UA) compared with existing state-of-the-art methods.
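The abstract describes, at a high level, an architecture that fuses CNN features learned from the spectrogram with hand-crafted chroma features through learnable weights, and (per the keywords) uses a bidirectional LSTM to handle variable-length utterances. The PyTorch sketch below illustrates this idea only; it is not the authors' implementation, and the layer sizes, the softmax-normalised pair of fusion weights, the mean pooling over frames, and the four-class output are illustrative assumptions.

# Minimal sketch of weighted feature fusion for variable-length speech emotion
# recognition (assumed configuration, not the paper's actual code).
import torch
import torch.nn as nn

class WeightedFusionSER(nn.Module):
    def __init__(self, n_mels=64, n_chroma=12, hidden=128, n_classes=4):
        super().__init__()
        # CNN branch: frame-level features learned from the spectrogram
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                      # pool frequency only, keep time
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        cnn_dim = 64 * (n_mels // 4)
        # Project both branches to a common dimension before fusion
        self.cnn_proj = nn.Linear(cnn_dim, hidden)
        self.chroma_proj = nn.Linear(n_chroma, hidden)
        # Learnable fusion weights, normalised with a softmax
        self.fusion_logits = nn.Parameter(torch.zeros(2))
        # BLSTM copes with variable-length sequences of fused frame features
        self.blstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, spec, chroma, lengths):
        # spec: (batch, n_mels, T)    chroma: (batch, n_chroma, T)
        x = self.cnn(spec.unsqueeze(1))                # (B, 64, n_mels//4, T)
        x = x.flatten(1, 2).transpose(1, 2)            # (B, T, cnn_dim)
        c = chroma.transpose(1, 2)                     # (B, T, n_chroma)
        w = torch.softmax(self.fusion_logits, dim=0)
        fused = w[0] * self.cnn_proj(x) + w[1] * self.chroma_proj(c)
        packed = nn.utils.rnn.pack_padded_sequence(
            fused, lengths, batch_first=True, enforce_sorted=False)
        out, _ = self.blstm(packed)
        out, _ = nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
        # Mean-pool over the valid (unpadded) frames, then classify
        t = torch.arange(out.size(1), device=out.device)
        lens = torch.as_tensor(lengths, device=out.device)
        mask = (t[None, :] < lens[:, None]).float()
        pooled = (out * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)
        return self.classifier(pooled)

# Example: a batch of two utterances zero-padded to 120 frames
model = WeightedFusionSER()
spec = torch.randn(2, 64, 120)
chroma = torch.randn(2, 12, 120)
logits = model(spec, chroma, lengths=[120, 87])        # -> shape (2, 4)

In this sketch both branches are projected to a common dimension so that a single pair of scalar weights can trade them off; the paper's actual fusion scheme may differ in these details.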
Year
2019
DOI
10.1109/IWCMC.2019.8766646
Venue
2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC)
Keywords
Speech Emotion Recognition, Bidirectional Long Short-Term Memory, Weighted Feature Fusion, Chroma Feature
Field
Motion capture, Feature fusion, Interaction systems, Emotion recognition, Spectrogram, Convolutional neural network, Computer science, Speech recognition, Intermediate language, Artificial neural network, Distributed computing
DocType
Conference
ISSN
2376-6492
Citations
0
PageRank
0.34
References
0
Authors
3
Name | Order | Citations | PageRank
Sifan Wu | 1 | 0 | 0.34
Fei Li | 2 | 973 | 9.93
Pengyuan Zhang | 3 | 501 | 9.46