Title
Learning Voice Source Related Information For Depression Detection
Abstract
During depression, neurophysiological changes can occur that may affect laryngeal control, i.e. the behaviour of the vocal folds. Characterising these changes precisely from speech signals is a non-trivial task, as it typically involves reliable separation of the voice source information from the speech signal. In this paper, exploiting the ability of CNNs to learn task-relevant information from raw input signals, we investigate several methods to model voice source related information for depression detection. Specifically, we investigate modelling of low-pass filtered speech signals, linear prediction residual signals, homomorphically filtered voice source signals and zero frequency filtered signals to learn voice source related information for depression detection. Our investigations show that subsegmental-level modelling of linear prediction residual signals or zero frequency filtered signals yields systems that outperform both state-of-the-art low-level descriptor based systems and deep learning based systems that model vocal tract system information.
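To make the signal representations concrete, the sketch below (a hedged illustration, not the authors' implementation) shows two of the voice source signals named in the abstract: the linear prediction (LP) residual obtained by frame-wise inverse filtering, and the zero frequency filtered (ZFF) signal obtained by passing the differenced speech through two zero-frequency resonators followed by local-mean trend removal. The function names, frame length, LP order heuristic and trend-removal window are illustrative assumptions; in the paper these signals are subsequently modelled by CNNs operating on the raw waveform at the subsegmental level.

```python
import numpy as np
from scipy.signal import lfilter


def lpc_autocorr(frame, order):
    """LPC coefficients via the autocorrelation method (Levinson-Durbin)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]   # order-update of the predictor
        err *= (1.0 - k * k)
    return a


def lp_residual(x, fs, order=None, frame_ms=25):
    """LP residual by inverse filtering non-overlapping frames (simple sketch)."""
    if order is None:
        order = 2 + fs // 1000                       # common rule-of-thumb LP order
    flen = int(frame_ms * 1e-3 * fs)
    res = np.zeros(len(x))
    for start in range(0, len(x) - flen + 1, flen):
        frame = x[start:start + flen].astype(float)
        a = lpc_autocorr(frame * np.hamming(flen), order)
        res[start:start + flen] = lfilter(a, [1.0], frame)   # apply A(z) to the speech
    return res


def zff(x, fs, trend_ms=10):
    """Zero frequency filtered signal: double integration plus trend removal."""
    dx = np.diff(x, prepend=x[0]).astype(float)      # difference to remove DC offset
    y = np.cumsum(np.cumsum(dx))                     # two ideal zero-frequency resonators
    n = int(trend_ms * 1e-3 * fs)
    win = np.ones(2 * n + 1) / (2 * n + 1)
    for _ in range(3):                               # repeated local-mean subtraction
        y = y - np.convolve(y, win, mode="same")
    return y


if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(fs)   # toy "voiced" signal
    print(lp_residual(x, fs).shape, zff(x, fs).shape)
```

In this sketch the LP residual is computed per frame and the ZFF signal over the whole utterance; the trend-removal window of about 10 ms is an assumption standing in for the usual choice of roughly one average pitch period.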
Year: 2019
DOI: 10.1109/icassp.2019.8683498
Venue: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Keywords: Convolutional neural networks, depression detection, zero-frequency filtering, glottal source signals
Field: Residual, Vocal folds, Pattern recognition, Neurophysiology, Computer science, Convolutional neural network, Linear prediction, Low-pass filter, Artificial intelligence, Deep learning, Vocal tract
DocType: Conference
ISSN: 1520-6149
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name | Order | Citations | PageRank
S. Pavankumar Dubagunta | 1 | 0 | 1.69
Bogdan Vlasenko | 2 | 0 | 0.34
Mathew Magimai-Doss | 3 | 516 | 54.76