TY - JOUR
T1 - On Elastic Language Models
AU - Zhang, Chen
AU - Wang, Benyou
AU - Song, Dawei
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/10/18
Y1 - 2024/10/18
AB - Large-scale pretrained language models have achieved compelling performance in a wide range of language understanding and information retrieval tasks. While their large scale ensures capacity, it also hinders deployment. Knowledge distillation offers an opportunity to compress a large language model into a small one in order to reach a reasonable latency-performance tradeoff. However, for scenarios where the number of requests (e.g., queries submitted to a search engine) is highly variable, the static tradeoff attained by the compressed language model might not always fit. Once a model is assigned a static tradeoff, it can be inadequate in that the latency is too high when the number of requests is large, or the performance is too low when the number of requests is small. To this end, we propose an elastic language model (ElasticLM) that elastically adjusts the tradeoff according to the request stream. The basic idea is to introduce compute elasticity into the compressed language model, so that the tradeoff can vary on the fly along a scalable and controllable amount of compute. Specifically, we impose an elastic structure to equip ElasticLM with compute elasticity and design an elastic optimization method to learn ElasticLM under compute elasticity. To serve ElasticLM, we apply an elastic schedule. Considering the specifics of information retrieval, we adapt ElasticLM to dense retrieval and reranking, and present ElasticDenser and ElasticRanker, respectively. Offline evaluation is conducted on the language understanding benchmark GLUE and on several information retrieval tasks, including Natural Questions, TriviaQA, and MS MARCO. The results show that ElasticLM, along with ElasticDenser and ElasticRanker, performs correctly and competitively compared with an array of static baselines. Furthermore, an online simulation with concurrency is carried out. The results demonstrate that ElasticLM can provide elastic tradeoffs with respect to varying request streams.
KW - compute elasticity
KW - dense retrieval
KW - pretrained language models
UR - http://www.scopus.com/inward/record.url?scp=85208393523&partnerID=8YFLogxK
DO - 10.1145/3677375
M3 - Article
AN - SCOPUS:85208393523
SN - 1046-8188
VL - 42
JO - ACM Transactions on Information Systems
JF - ACM Transactions on Information Systems
IS - 6
M1 - 161
ER -