TY - GEN
T1 - An early study on intelligent analysis of speech under COVID-19
T2 - 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020
AU - Han, Jing
AU - Qian, Kun
AU - Song, Meishu
AU - Yang, Zijiang
AU - Ren, Zhao
AU - Liu, Shuo
AU - Liu, Juan
AU - Zheng, Huaiyuan
AU - Ji, Wei
AU - Koike, Tomoya
AU - Li, Xiao
AU - Zhang, Zixing
AU - Yamamoto, Yoshiharu
AU - Schuller, Björn W.
N1 - Publisher Copyright:
© 2020 ISCA
PY - 2020
Y1 - 2020
N2 - The COVID-19 outbreak was announced as a global pandemic by the World Health Organisation in March 2020 and has affected a growing number of people in the past few weeks. In this context, advanced artificial intelligence techniques are brought to the fore in the fight against, and to reduce the impact of, this global health crisis. In this study, we focus on developing some potential use-cases of intelligent speech analysis for COVID-19 diagnosed patients. In particular, by analysing speech recordings from these patients, we construct audio-only-based models to automatically categorise the health state of patients from four aspects, including the severity of illness, sleep quality, fatigue, and anxiety. For this purpose, two established acoustic feature sets and support vector machines are utilised. Our experiments show that an average accuracy of .69 is obtained when estimating the severity of illness, which is derived from the number of days in hospitalisation. We hope that this study can foster an extremely fast, low-cost, and convenient way to automatically detect the COVID-19 disease.
AB - The COVID-19 outbreak was announced as a global pandemic by the World Health Organisation in March 2020 and has affected a growing number of people in the past few weeks. In this context, advanced artificial intelligence techniques are brought to the fore in the fight against, and to reduce the impact of, this global health crisis. In this study, we focus on developing some potential use-cases of intelligent speech analysis for COVID-19 diagnosed patients. In particular, by analysing speech recordings from these patients, we construct audio-only-based models to automatically categorise the health state of patients from four aspects, including the severity of illness, sleep quality, fatigue, and anxiety. For this purpose, two established acoustic feature sets and support vector machines are utilised. Our experiments show that an average accuracy of .69 is obtained when estimating the severity of illness, which is derived from the number of days in hospitalisation. We hope that this study can foster an extremely fast, low-cost, and convenient way to automatically detect the COVID-19 disease.
KW - COVID-19 diagnosis
KW - Computational paralinguistics
KW - Speech analysis
UR - http://www.scopus.com/inward/record.url?scp=85098144614&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2020-2223
DO - 10.21437/Interspeech.2020-2223
M3 - Conference contribution
AN - SCOPUS:85098144614
SN - 9781713820697
T3 - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
SP - 4946
EP - 4950
BT - Interspeech 2020
PB - International Speech Communication Association
Y2 - 25 October 2020 through 29 October 2020
ER -