TY - JOUR
T1 - Turning Dust into Gold: Distilling Complex Reasoning Capabilities from LLMs by Leveraging Negative Data
T2 - 38th AAAI Conference on Artificial Intelligence, AAAI 2024
AU - Li, Yiwei
AU - Yuan, Peiwen
AU - Feng, Shaoxiong
AU - Pan, Boyuan
AU - Sun, Bin
AU - Wang, Xinglin
AU - Wang, Heda
AU - Li, Kan
N1 - Publisher Copyright:
© 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2024/3/25
AB - Large Language Models (LLMs) have performed well on various reasoning tasks, but their inaccessibility and large number of parameters hinder wide application in practice. One promising approach is to distill the reasoning ability of LLMs into small models via generated chain-of-thought reasoning paths. In some cases, however, LLMs may produce incorrect reasoning chains, especially when facing complex mathematical problems. Previous studies only transfer knowledge from positive samples and discard the synthesized data with wrong answers. In this work, we illustrate the merit of negative data and propose a model specialization framework that distills LLMs with negative samples in addition to positive ones. The framework consists of three progressive steps, spanning the training and inference stages, to absorb knowledge from negative data. We conduct extensive experiments across arithmetic reasoning tasks to demonstrate the role of negative data in distillation from LLMs.
UR - http://www.scopus.com/inward/record.url?scp=85189643705&partnerID=8YFLogxK
DO - 10.1609/aaai.v38i17.29821
M3 - Conference article
AN - SCOPUS:85189643705
SN - 2159-5399
VL - 38
SP - 18591
EP - 18599
JO - Proceedings of the AAAI Conference on Artificial Intelligence
JF - Proceedings of the AAAI Conference on Artificial Intelligence
IS - 17
Y2 - 20 February 2024 through 27 February 2024
ER -