TY - CONF
T1 - Fundamental Capabilities of Large Language Models and their Applications in Domain Scenarios
T2 - 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
AU - Li, Jiawei
AU - Yang, Yizhe
AU - Bai, Yu
AU - Zhou, Xiaofeng
AU - Li, Yinghao
AU - Sun, Huashan
AU - Liu, Yuhang
AU - Si, Xingpeng
AU - Ye, Yuhao
AU - Wu, Yixiao
AU - Lin, Yiguan
AU - Xu, Bin
AU - Ren, Bowen
AU - Feng, Chong
AU - Gao, Yang
AU - Huang, Heyan
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
N2 - Large Language Models (LLMs) demonstrate significant value in domain-specific applications, benefiting from their fundamental capabilities. Nevertheless, it remains unclear which fundamental capabilities contribute to success in specific domains. Moreover, existing benchmark-based evaluation cannot effectively reflect performance in real-world applications. In this survey, we review recent advances in applying LLMs to domain scenarios, aiming to summarize the fundamental capabilities and how they work together. Furthermore, we establish connections between fundamental capabilities and specific domains, evaluating the varying importance of different capabilities. Based on our findings, we propose a reliable strategy by which each domain can choose more robust backbone LLMs for real-world applications.
AB - Large Language Models (LLMs) demonstrate significant value in domain-specific applications, benefiting from their fundamental capabilities. Nevertheless, it remains unclear which fundamental capabilities contribute to success in specific domains. Moreover, existing benchmark-based evaluation cannot effectively reflect performance in real-world applications. In this survey, we review recent advances in applying LLMs to domain scenarios, aiming to summarize the fundamental capabilities and how they work together. Furthermore, we establish connections between fundamental capabilities and specific domains, evaluating the varying importance of different capabilities. Based on our findings, we propose a reliable strategy by which each domain can choose more robust backbone LLMs for real-world applications.
UR - http://www.scopus.com/inward/record.url?scp=85204445649&partnerID=8YFLogxK
U2 - 10.18653/v1/2024.acl-long.599
DO - 10.18653/v1/2024.acl-long.599
M3 - Conference contribution
AN - SCOPUS:85204445649
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 11116
EP - 11141
BT - Long Papers
A2 - Ku, Lun-Wei
A2 - Martins, Andre F. T.
A2 - Srikumar, Vivek
PB - Association for Computational Linguistics (ACL)
Y2 - 11 August 2024 through 16 August 2024
ER -