Study Selectively: An Adaptive Knowledge Distillation based on a Voting Network for Heart Sound Classification

Xihang Qiu, Lixian Zhu, Zikai Song, Zeyu Chen, Haojie Zhang, Kun Qian*, Ye Zhang*, Bin Hu*, Yoshiharu Yamamoto, Björn W. Schuller

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

5 Citations (Scopus)

Abstract

Phonocardiogram classification methods based on deep neural networks have recently been widely applied to the early detection of cardiovascular diseases. Despite their excellent recognition rates, their sizeable computational complexity limits further development. Knowledge distillation (KD) is now an established paradigm for model compression. While current research on multi-teacher KD has shown the potential to impart more comprehensive knowledge to the student than single-teacher KD, the approach is not suitable for all scenarios. This paper proposes a novel KD strategy that realises an adaptive multi-teacher instruction mechanism. We design a teacher selection strategy, called a voting network, to assess the contribution of each teacher at every distillation point, so that the student can select useful information and discard what is redundant. An evaluation demonstrates that our method reaches excellent accuracy (92.8%) while maintaining a low computational complexity (0.7 M parameters).
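The core idea of the abstract — a voting network that weights each teacher's contribution at a distillation point before forming the student's soft target — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the temperature value, and the use of a simple softmax over voting scores as the gating step are all assumptions.

```python
import numpy as np

def softmax(x, t=1.0):
    # Temperature-scaled softmax, numerically stabilised.
    z = np.exp((x - np.max(x, axis=-1, keepdims=True)) / t)
    return z / z.sum(axis=-1, keepdims=True)

def kd_targets(teacher_logits, vote_scores, temperature=2.0):
    """Combine teacher soft labels, weighted by (hypothetical) voting-network scores.

    teacher_logits: (n_teachers, n_classes) logits from each teacher
    vote_scores:    (n_teachers,) raw scores emitted by the voting network
    """
    weights = softmax(vote_scores)                    # per-teacher contribution, sums to 1
    probs = softmax(teacher_logits, t=temperature)    # soften each teacher's distribution
    return weights @ probs                            # adaptive soft target, shape (n_classes,)

def kd_loss(student_logits, soft_target, temperature=2.0):
    """KL divergence between the combined teacher target and the student's softened output."""
    p = softmax(student_logits, t=temperature)
    return float(np.sum(soft_target * (np.log(soft_target + 1e-12) - np.log(p + 1e-12))))
```

When the voting network strongly favours one teacher, the combined target collapses toward that teacher's distribution, which is the "study selectively" behaviour the paper describes; redundant teachers receive near-zero weight instead of diluting the target.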

Original language: English
Pages (from-to): 137-141
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publication status: Published - 2024
Event: 25th Interspeech Conference 2024, Kos Island, Greece
Duration: 1 Sept 2024 – 5 Sept 2024

Keywords

  • Adaptive Knowledge Distillation
  • Computer Audition
  • Heart Sound
