2-level hierarchical depression recognition method based on task-stimulated and integrated speech features

Yujuan Xing, Zhenyu Liu*, Gang Li, Zhi Jie Ding, Bin Hu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

Depression has received increasing attention from researchers because of its high prevalence, recurrence, disability and mortality. Speech-based depression recognition has become a research hotspot due to its non-invasiveness and the easy availability of data. However, problems such as speech variation under different emotional stimuli, gender effects, speaker and channel variation, and the variable length of frame-level features can strongly affect recognition performance. To address these problems, a novel 2-level hierarchical depression recognition method is proposed in this paper. It consists of two stages. In the 1st-level classification stage, i-vectors are extracted from the spectral features, prosodic features, formants and voice quality of speech segments under each task stimulus. Then, a support vector machine (SVM) and a random forest (RF) are used to obtain preliminary results. In the 2nd-level classification stage, the results of tasks with significantly different accuracies are aggregated into new integrated features, and the final result is obtained by applying an SVM to these features. Our experiments are based on the depression speech database of the Gansu Provincial Key Laboratory of Wearable Computing. The experimental results show that the proposed method achieves good results in both gender-independent and gender-dependent experiments. Compared with the baseline method and bagging classification, the highest accuracy of our method is improved by 9.62% and 9.49% respectively in the gender-independent experiments, and the F1 score is also improved noticeably. The results also show that our method is more robust to gender effects.
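
The abstract outlines a two-stage pipeline: per-task classifiers on i-vectors, followed by a second-level SVM over the aggregated scores of selected tasks. The sketch below is only an illustration of that general scheme, not the authors' implementation; it assumes per-task i-vectors are already extracted, uses scikit-learn estimators, and uses out-of-fold probability scores as the "integrated features" (the paper's task-selection criterion and exact aggregation are not reproduced here).

```python
# Hypothetical sketch of a 2-level hierarchical classifier; feature extraction,
# task selection and hyperparameters are placeholders, not the published method.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict


def first_level(task_ivectors, labels):
    """1st level: one SVM and one RF per task, returning out-of-fold scores.

    task_ivectors: list of arrays, each (n_subjects, ivector_dim), one per task.
    labels: array of shape (n_subjects,) with depressed / control labels.
    """
    per_task_scores = []
    for X in task_ivectors:
        svm = SVC(kernel="linear", probability=True)
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        # Out-of-fold posterior for the positive class from each classifier.
        p_svm = cross_val_predict(svm, X, labels, cv=5, method="predict_proba")[:, 1]
        p_rf = cross_val_predict(rf, X, labels, cv=5, method="predict_proba")[:, 1]
        per_task_scores.append(np.column_stack([p_svm, p_rf]))
    return per_task_scores


def second_level(selected_task_scores, labels):
    """2nd level: concatenate scores of the selected tasks and train a final SVM."""
    Z = np.hstack(selected_task_scores)  # integrated feature vector per subject
    final_clf = SVC(kernel="linear")
    final_clf.fit(Z, labels)
    return final_clf
```

In this reading, the "integrated features" fed to the final SVM are simply the stacked first-level scores of the chosen tasks; any criterion based on accuracy differences between tasks would decide which entries of `per_task_scores` are passed to `second_level`.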

Original language: English
Article number: 103287
Journal: Biomedical Signal Processing and Control
Volume: 72
DOI
Publication status: Published - Feb 2022
Externally published
