Learning representations from heart sound: A comparative study on shallow and deep models

Kun Qian, Zhihao Bao, Zhonghao Zhao, Tomoya Koike, Fengquan Dong*, Maximilian Schmitt, Qunxi Dong, Jian Shen, Weipeng Jiang, Yajuan Jiang, Bo Dong, Zhenyu Dai, Bin Hu*, Björn W. Schuller, Yoshiharu Yamamoto

*Corresponding authors for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Leveraging the power of artificial intelligence to facilitate automatic analysis and monitoring of heart sounds has attracted tremendous effort in the past decade. Nevertheless, the lack of a standard open-access database made it difficult to sustain comparable research before the first release of the PhysioNet/CinC Challenge dataset. However, inconsistent standards for data collection, annotation, and partitioning still restrain fair and efficient comparison between different works. To this end, we introduced and benchmarked a first version of the Heart Sounds Shenzhen (HSS) corpus. Motivated and inspired by previous works based on HSS, we redefined the tasks and made a comprehensive investigation of shallow and deep models in this study. First, we segmented the heart sound recordings into shorter recordings (10 s), which makes the setting more similar to human auscultation. Second, we redefined the classification tasks: besides the 3-class categories (normal, moderate, and mild/severe) adopted in HSS, we added a binary classification task, i.e., normal vs. abnormal. We provide detailed benchmarks based on both classic machine learning and state-of-the-art deep learning technologies, which are reproducible using open-source toolkits. Last but not least, we analyzed the feature contributions underlying the best performance achieved by the benchmark to make the results more convincing and interpretable.
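To make the segmentation step concrete, the following minimal Python sketch splits a recording into non-overlapping 10 s segments. It is an illustration only, not the authors' code: the sample rate, the use of NumPy, and the convention of discarding a short tail segment are all assumptions.

import numpy as np

def segment_recording(signal: np.ndarray, sr: int, seg_len_s: float = 10.0):
    """Split a 1-D audio signal into non-overlapping seg_len_s-second chunks.

    A trailing chunk shorter than seg_len_s is discarded, an assumed
    convention when fixed-length model inputs are required.
    """
    seg_len = int(seg_len_s * sr)
    n_segments = len(signal) // seg_len
    return [signal[i * seg_len:(i + 1) * seg_len] for i in range(n_segments)]

# Illustrative usage with a synthetic 35 s recording at 4 kHz
# (heart sounds are typically sampled at a few kHz; the rate is assumed here).
sr = 4000
recording = np.random.randn(35 * sr)
segments = segment_recording(recording, sr)
print(len(segments))  # 3 segments of 40000 samples each; the 5 s tail is dropped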

Original language: English
Article number: 0075
Journal: Cyborg and Bionic Systems
Volume: 5
DOI
Publication status: Published - 2024
