Title
Exploring Siamese Neural Network Architectures for Preserving Speaker Identity in Speech Emotion Classification
Abstract
Voice-enabled communication is increasingly used in real-world applications, such as those involving conversational agents or "chatbots". Chatbots can spark and sustain user engagement by effectively recognizing users' emotions and acting upon them. However, most emotion recognition systems rely on rich spectrotemporal acoustic features. Beyond emotion-related information, such systems tend to preserve information relevant to the identity of the speaker, raising major privacy concerns for users. This paper introduces two hybrid architectures for privacy-preserving emotion recognition from speech. These architectures rely on a Siamese neural network whose input and intermediate layers are transformed with various privacy-preserving operations in order to retain emotion-dependent content and suppress information related to speaker identity. The proposed approach is evaluated through emotion classification and speaker identification performance metrics. Results indicate that the proposed framework achieves up to 67.4% accuracy for classifying happy, sad, frustrated, angry, neutral, and other emotions in the publicly available Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset. At the same time, the proposed approach reduces speaker identification accuracy to 50%, compared to the 81% achieved by a feedforward neural network trained solely on the speaker identification task using the same input features.
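The abstract describes a Siamese neural network whose shared layers learn embeddings that keep emotion-dependent content while discarding speaker identity. As a rough illustration of the Siamese idea only (not the paper's actual architecture, and omitting its privacy transforms such as PCA and multiplicative perturbation), the sketch below uses a generic shared two-layer encoder with a contrastive loss; all layer sizes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 40-dim acoustic feature vectors, 8-dim embeddings.
d_in, d_hid, d_emb = 40, 16, 8
W1 = rng.normal(0, 0.1, (d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(0, 0.1, (d_hid, d_emb)); b2 = np.zeros(d_emb)

def encoder(x):
    # Shared two-layer MLP: both branches of the Siamese network
    # use the same weights, so they map inputs into a common space.
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def contrastive_loss(x1, x2, same_emotion, margin=1.0):
    # Pull same-emotion pairs together; push different-emotion pairs
    # at least `margin` apart in the embedding space.
    d = np.linalg.norm(encoder(x1) - encoder(x2))
    return d ** 2 if same_emotion else max(0.0, margin - d) ** 2

x_a, x_b = rng.normal(size=d_in), rng.normal(size=d_in)
loss_pos = contrastive_loss(x_a, x_b, same_emotion=True)
loss_neg = contrastive_loss(x_a, x_b, same_emotion=False)
```

Because the loss depends only on pairwise distances between embeddings, training pairs can be chosen by emotion label alone, which is one way such a network can be encouraged to organize its representation around emotion rather than speaker identity.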
Year
2018
DOI
10.1145/3279972.3279980
Venue
MA3HMI@ICMI
Keywords
Emotionally-aware conversational agents, speaker identity, Siamese neural network, repeated Gompertz function, principal component analysis, multiplicative perturbation
DocType
Conference
ISBN
978-1-4503-6076-0
Citations
0
PageRank
0.34
References
0
Authors
2
Name | Order | Citations | PageRank
Priya Arora | 1 | 0 | 1.01
Theodora Chaspari | 2 | 38 | 19.43