TY - GEN
T1 - Detection of depression in speech
AU - Liu, Zhenyu
AU - Hu, Bin
AU - Yan, Lihua
AU - Wang, Tianyang
AU - Liu, Fei
AU - Li, Xiaoyu
AU - Kang, Huanyu
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/12/2
Y1 - 2015/12/2
N2 - Depression is a common mental disorder and one of the leading causes of disability worldwide. The lack of objective assessment methods for depressive disorder is a key reason many depressed patients are not treated properly. Developments in affective sensing technology focused on acoustic features may change this, since the slow, hesitant, monotonous voice of depressed patients is a remarkable characteristic. Our goal is to identify a speech feature set that can detect, evaluate, and even predict depression. To this end, we investigate a large sample of 300 subjects (100 depressed patients, 100 healthy controls, and 100 high-risk individuals) through comparative analysis and a follow-up study. To examine the correlation between depression and speech, we extract as many features as possible based on previous research to create a large voice feature set. We then apply feature selection methods to eliminate irrelevant, redundant, and noisy features and form a compact subset. To measure the effectiveness of this subset, we test it on our 300-subject dataset using several common classifiers with 10-fold cross-validation. As data collection is still in progress, we have no results to report yet.
AB - Depression is a common mental disorder and one of the leading causes of disability worldwide. The lack of objective assessment methods for depressive disorder is a key reason many depressed patients are not treated properly. Developments in affective sensing technology focused on acoustic features may change this, since the slow, hesitant, monotonous voice of depressed patients is a remarkable characteristic. Our goal is to identify a speech feature set that can detect, evaluate, and even predict depression. To this end, we investigate a large sample of 300 subjects (100 depressed patients, 100 healthy controls, and 100 high-risk individuals) through comparative analysis and a follow-up study. To examine the correlation between depression and speech, we extract as many features as possible based on previous research to create a large voice feature set. We then apply feature selection methods to eliminate irrelevant, redundant, and noisy features and form a compact subset. To measure the effectiveness of this subset, we test it on our 300-subject dataset using several common classifiers with 10-fold cross-validation. As data collection is still in progress, we have no results to report yet.
KW - acoustic feature
KW - depression
KW - feature selection
KW - speech
UR - http://www.scopus.com/inward/record.url?scp=84964089139&partnerID=8YFLogxK
U2 - 10.1109/ACII.2015.7344652
DO - 10.1109/ACII.2015.7344652
M3 - Conference contribution
AN - SCOPUS:84964089139
T3 - 2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015
SP - 743
EP - 747
BT - 2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015
Y2 - 21 September 2015 through 24 September 2015
ER -