TY - JOUR
T1 - Learning representations from heart sound
T2 - A comparative study on shallow and deep models
AU - Qian, Kun
AU - Bao, Zhihao
AU - Zhao, Zhonghao
AU - Koike, Tomoya
AU - Dong, Fengquan
AU - Schmitt, Maximilian
AU - Dong, Qunxi
AU - Shen, Jian
AU - Jiang, Weipeng
AU - Jiang, Yajuan
AU - Dong, Bo
AU - Dai, Zhenyu
AU - Hu, Bin
AU - Schuller, Björn W.
AU - Yamamoto, Yoshiharu
N1 - Publisher Copyright:
© 2024 Kun Qian et al.
PY - 2024
Y1 - 2024
N2 - Leveraging the power of artificial intelligence to facilitate automatic analysis and monitoring of heart sounds has attracted tremendous efforts in the past decade. Nevertheless, the lack of a standard open-access database made it difficult to sustain comparable research before the first release of the PhysioNet CinC Challenge Dataset. However, inconsistent standards for data collection, annotation, and partition still hinder a fair and efficient comparison between different works. To this end, we introduced and benchmarked a first version of the Heart Sounds Shenzhen (HSS) corpus. Motivated and inspired by previous works based on HSS, we redefined the tasks and conducted a comprehensive investigation of shallow and deep models in this study. First, we segmented the heart sound recordings into shorter recordings (10 s), making them more similar to the human auscultation case. Second, we redefined the classification tasks. Besides the 3-class categorization (normal, moderate, and mild/severe) adopted in HSS, we added a binary classification task, i.e., normal vs. abnormal. We provided detailed benchmarks based on both classic machine learning and state-of-the-art deep learning technologies, which are reproducible using open-source toolkits. Last but not least, we analyzed the feature contributions of the best performance achieved by the benchmarks to make the results more convincing and interpretable.
AB - Leveraging the power of artificial intelligence to facilitate automatic analysis and monitoring of heart sounds has attracted tremendous efforts in the past decade. Nevertheless, the lack of a standard open-access database made it difficult to sustain comparable research before the first release of the PhysioNet CinC Challenge Dataset. However, inconsistent standards for data collection, annotation, and partition still hinder a fair and efficient comparison between different works. To this end, we introduced and benchmarked a first version of the Heart Sounds Shenzhen (HSS) corpus. Motivated and inspired by previous works based on HSS, we redefined the tasks and conducted a comprehensive investigation of shallow and deep models in this study. First, we segmented the heart sound recordings into shorter recordings (10 s), making them more similar to the human auscultation case. Second, we redefined the classification tasks. Besides the 3-class categorization (normal, moderate, and mild/severe) adopted in HSS, we added a binary classification task, i.e., normal vs. abnormal. We provided detailed benchmarks based on both classic machine learning and state-of-the-art deep learning technologies, which are reproducible using open-source toolkits. Last but not least, we analyzed the feature contributions of the best performance achieved by the benchmarks to make the results more convincing and interpretable.
UR - http://www.scopus.com/inward/record.url?scp=85191544573&partnerID=8YFLogxK
U2 - 10.34133/cbsystems.0075
DO - 10.34133/cbsystems.0075
M3 - Article
AN - SCOPUS:85191544573
SN - 2097-1087
VL - 5
JO - Cyborg and Bionic Systems
JF - Cyborg and Bionic Systems
M1 - 0075
ER -