Abstract
In this paper, we propose a simple and efficient method for knowledge distillation that adds negligible computational overhead. Our method comprises three modules. The first is a calibrated mask, which prevents the teacher model's incorrect representations from disturbing the student model's training; the second and third modules improve the student model by exploiting sample-level and process-level similarity, respectively. By combining these three modules, the student model achieves better performance in both qualitative and quantitative evaluation. We validate our method on the standard CIFAR-100 and TinyImageNet benchmarks, where it outperforms state-of-the-art approaches on both subjective and objective measures.
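The paper itself does not include code, but the calibrated-mask idea can be illustrated generically: during distillation, samples the teacher misclassifies are masked out of the distillation loss so its incorrect predictions do not supervise the student. The sketch below is an assumption-laden illustration in NumPy (the function name `masked_kd_loss`, the temperature `T`, and the hard correctness mask are our own choices, not the authors' exact formulation):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the class axis."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def masked_kd_loss(teacher_logits, student_logits, labels, T=4.0):
    """Illustrative distillation loss with a correctness mask.

    Samples on which the teacher's prediction is wrong are dropped,
    so the teacher's incorrect outputs never supervise the student.
    (A hedged sketch of the idea, not the paper's exact method.)
    """
    mask = teacher_logits.argmax(axis=1) == labels
    if not mask.any():
        return 0.0  # no reliable teacher signal in this batch
    p_t = softmax(teacher_logits[mask], T)
    p_s = softmax(student_logits[mask], T)
    # KL(teacher || student) per retained sample, scaled by T^2
    # as is conventional in temperature-based distillation.
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=1)
    return float(T * T * kl.mean())
```

In a real training loop this term would be added to the usual cross-entropy loss on ground-truth labels; the paper's sample- and process-similarity modules would contribute further terms.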
| Original language | English |
|---|---|
| Pages (from-to) | 5770-5774 |
| Number of pages | 5 |
| Journal | Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing |
| DOIs | |
| Publication status | Published - 2024 |
| Externally published | Yes |
| Event | 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024), Seoul, Korea, Republic of; 14 Apr 2024 → 19 Apr 2024 |
Keywords
- Classification
- Computer Vision
- Deep Learning
- Knowledge Distillation
- Model Compression