TY - GEN
T1 - Lifting the Curse of Capacity Gap in Distilling Language Models
AU - Zhang, Chen
AU - Yang, Yang
AU - Liu, Jiahao
AU - Wang, Jingang
AU - Xian, Yunsen
AU - Wang, Benyou
AU - Song, Dawei
N1 - Publisher Copyright:
© 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
N2 - Pretrained language models (LMs) have shown compelling performance on various downstream tasks, but unfortunately they require a tremendous amount of inference compute. Knowledge distillation offers a path to compressing LMs into small ones via a teacher-student paradigm. However, when the capacity gap between the teacher and the student is large, a curse of capacity gap appears and degrades the distillation of LMs. While a few studies have been carried out to fill the gap, the curse is not yet well tackled. In this paper, we aim to lift the curse of capacity gap by enlarging the capacity of the student without notably increasing the inference compute. Largely motivated by the sparse activation regime of mixture of experts (MoE), we propose a mixture of minimal experts (MiniMoE), which adds extra parameters to the student yet introduces almost no additional inference compute. Experimental results on GLUE and CoNLL demonstrate that MiniMoE lifts the curse of capacity gap to a large extent. MiniMoE also achieves state-of-the-art performance at small FLOPs compared with a range of competitive baselines. At a compression rate of up to ∼50×, MiniMoE preserves ∼95% of the teacher's GLUE score.
AB - Pretrained language models (LMs) have shown compelling performance on various downstream tasks, but unfortunately they require a tremendous amount of inference compute. Knowledge distillation offers a path to compressing LMs into small ones via a teacher-student paradigm. However, when the capacity gap between the teacher and the student is large, a curse of capacity gap appears and degrades the distillation of LMs. While a few studies have been carried out to fill the gap, the curse is not yet well tackled. In this paper, we aim to lift the curse of capacity gap by enlarging the capacity of the student without notably increasing the inference compute. Largely motivated by the sparse activation regime of mixture of experts (MoE), we propose a mixture of minimal experts (MiniMoE), which adds extra parameters to the student yet introduces almost no additional inference compute. Experimental results on GLUE and CoNLL demonstrate that MiniMoE lifts the curse of capacity gap to a large extent. MiniMoE also achieves state-of-the-art performance at small FLOPs compared with a range of competitive baselines. At a compression rate of up to ∼50×, MiniMoE preserves ∼95% of the teacher's GLUE score.
UR - http://www.scopus.com/inward/record.url?scp=85174400901&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85174400901
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 4535
EP - 4553
BT - Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
PB - Association for Computational Linguistics (ACL)
T2 - 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Y2 - 9 July 2023 through 14 July 2023
ER -