Title
Coupled HMM-based multimodal fusion for mood disorder detection through elicited audio-visual signals.
Abstract
Mood disorders encompass a wide range of conditions, including unipolar depression (UD) and bipolar disorder (BD). In diagnostic evaluations of outpatients with mood disorders, a high percentage of BD patients are initially misdiagnosed with UD. Accurately distinguishing BD from UD is crucial for a correct and early diagnosis, which improves treatment and the course of the illness. In this study, emotional videos are first used to elicit the patients' emotions. After the patients watch each video clip, their facial expressions and speech responses are collected during an interview with a clinician. For mood disorder detection, facial action unit (AU) profiles and speech emotion profiles (EPs) are obtained using support vector machines (SVMs) built on facial and speech features adapted from two selected databases with a denoising-autoencoder-based method. Finally, a coupled hidden Markov model (CHMM)-based fusion method is proposed to characterize the temporal information; the CHMM is modified to fuse the AUs and EPs with respect to six emotional videos. Experimental results demonstrate the advantage and efficacy of the CHMM-based fusion approach for mood disorder detection.
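The CHMM fusion named in the abstract couples two hidden-state chains so that each modality's state at time t depends on both modalities' states at t-1, and the joint likelihood fuses the two observation streams (here, AU and EP sequences). The following is a minimal, illustrative sketch of the forward likelihood over the product state space; all parameter names and numbers are assumptions for demonstration and do not reproduce the paper's actual topology, features, or training.

```python
# Hypothetical sketch of a two-chain coupled HMM forward pass
# (illustrative only; not the authors' exact model or parameters).
import itertools
import math

def chmm_loglik(obs_a, obs_b, pi_a, pi_b, trans_a, trans_b, emit_a, emit_b):
    """Log-likelihood of two coupled observation streams.

    trans_a[p][q][i] = P(chain-A state i | prev A state p, prev B state q),
    and symmetrically for trans_b; emissions are per-chain discrete tables.
    The forward recursion runs over the joint (A, B) state space.
    """
    Na, Nb = len(pi_a), len(pi_b)
    # t = 0: joint initial probability times both emissions
    alpha = {(i, j): pi_a[i] * emit_a[i][obs_a[0]]
                     * pi_b[j] * emit_b[j][obs_b[0]]
             for i in range(Na) for j in range(Nb)}
    for t in range(1, len(obs_a)):
        new = {}
        for i, j in itertools.product(range(Na), range(Nb)):
            # each chain's transition conditions on BOTH previous states
            s = sum(alpha[(p, q)] * trans_a[p][q][i] * trans_b[p][q][j]
                    for p in range(Na) for q in range(Nb))
            new[(i, j)] = s * emit_a[i][obs_a[t]] * emit_b[j][obs_b[t]]
        alpha = new
    return math.log(sum(alpha.values()))

# Toy parameters: two 2-state chains over binary observation symbols.
pi = [0.5, 0.5]
trans = [[[0.9, 0.1], [0.5, 0.5]],   # trans[p][q] is a distribution over
         [[0.5, 0.5], [0.1, 0.9]]]   # the next state of one chain
emit = [[0.8, 0.2], [0.2, 0.8]]
ll = chmm_loglik([0, 0, 1], [0, 1, 1], pi, pi, trans, trans, emit, emit)
```

In a detection setting of this kind, one CHMM would typically be trained per class (e.g., BD vs. UD) and a test session assigned to the class whose model yields the higher log-likelihood.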
Year
2017
DOI
10.1007/s12652-016-0395-y
Venue
J. Ambient Intelligence and Humanized Computing
Keywords
Mood disorder detection, Coupled Hidden Markov Model, Multimodal fusion, Autoencoder adaptation
Field
Mood, Bipolar disorder, Mood disorders, Computer science, Support vector machine, Speech recognition, Facial expression, Denoising autoencoder, Hidden Markov model
DocType
Journal
Volume
8
Issue
6
ISSN
1868-5145
Citations
3
PageRank
0.39
References
22
Authors
4
Name	Order	Citations	PageRank
Tsung-Hsien Yang	1	45	4.71
Chung-Hsien Wu	2	1099	116.79
Kun-Yi Huang	3	14	5.00
Ming-Hsiang Su	4	21	6.83