
Toward Effective Knowledge Distillation: Navigating Beyond Small-Data Pitfall

  • Zhiwei Hao
  • Jianyuan Guo
  • Kai Han
  • Han Hu*
  • Chang Xu
  • Yunhe Wang
*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

Abstract

The spectacular success of training large models on extensive datasets highlights the potential of scaling up for exceptional performance. To deploy these models on edge devices, knowledge distillation (KD) is commonly used to create a compact model from a larger, pretrained teacher model. However, as models and datasets rapidly scale up in practical applications, it is crucial to consider the applicability of existing KD approaches originally designed for limited-capacity architectures and small-scale datasets. In this paper, we revisit current KD methods and identify the presence of a small-data pitfall, where most modifications to vanilla KD prove ineffective on large-scale datasets. To guide the design of consistently effective KD methods across different data scales, we conduct a meticulous evaluation of the knowledge transfer process. Our findings reveal that incorporating more useful information is crucial for achieving consistently effective KD methods, while modifications in loss functions show relatively less significance. In light of this, we present a paradigmatic example that combines vanilla KD with deep supervision, incorporating additional information into the student during distillation. This approach surpasses almost all recent KD methods. We believe our study will offer valuable insights to guide the community in navigating beyond the small-data pitfall and toward consistently effective KD.
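The abstract describes the proposed recipe only at a high level. As a rough illustration, below is a minimal PyTorch sketch of one common way to combine vanilla KD (Hinton-style soft-target matching) with deep supervision, where auxiliary classifier heads on intermediate student stages also distill from the teacher's final logits. The names aux_logits_list, alpha, beta, and the temperature T are illustrative assumptions, not the paper's notation or exact method.

import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, T=4.0):
    # Hinton-style KD: KL divergence between temperature-softened
    # distributions, scaled by T^2 to keep gradient magnitudes stable.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def deeply_supervised_kd_loss(main_logits, aux_logits_list, teacher_logits,
                              labels, T=4.0, alpha=1.0, beta=0.5):
    # Main head: cross-entropy on hard labels plus vanilla KD to the teacher.
    loss = F.cross_entropy(main_logits, labels)
    loss = loss + alpha * vanilla_kd_loss(main_logits, teacher_logits, T)
    # Deep supervision (assumed form): every auxiliary head attached to an
    # intermediate student stage distills from the same teacher logits,
    # injecting the extra supervisory signal the abstract alludes to.
    for aux_logits in aux_logits_list:
        loss = loss + beta * vanilla_kd_loss(aux_logits, teacher_logits, T)
    return loss

Feeding every auxiliary head the teacher's soft targets is one straightforward way to inject additional information into the student during distillation; the exact heads and loss weights used in the paper may differ.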

Original language: English
Pages (from-to): 542-556
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 48
Issue number: 1
DOI
Publication status: Published - 2026
Externally published: Yes
