Abstract
Phonocardiogram classification methods based on deep neural networks have recently been widely applied to the early detection of cardiovascular diseases. Despite their excellent recognition rates, their sizeable computational complexity limits further development. Knowledge distillation (KD) is now an established paradigm for model compression. While current research on multi-teacher KD has shown the potential to impart more comprehensive knowledge to the student than single-teacher KD, this approach is not suitable for all scenarios. This paper proposes a novel KD strategy that realises an adaptive multi-teacher instruction mechanism. We design a teacher selection strategy, called a voting network, that estimates the contribution of each teacher at each distillation point, so that the student can retain useful information and discard redundant information. An evaluation demonstrates that our method reaches excellent accuracy (92.8 %) while maintaining a low computational complexity (0.7 M).
Original language | English |
---|---|
Pages (from-to) | 137-141 |
Number of pages | 5 |
Journal | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
DOIs | |
Publication status | Published - 2024 |
Event | 25th Interspeech Conference 2024 - Kos Island, Greece Duration: 1 Sept 2024 → 5 Sept 2024 |
Keywords
- Adaptive Knowledge Distillation
- Computer Audition
- Heart Sound