TY - JOUR
T1 - Evaluating and Improving GPT-Based Expansion of Abbreviations
AU - Jiang, Yanjie
AU - Li, Chenxu
AU - Zhao, Zixiao
AU - Fan, Fu
AU - Zhang, Lu
AU - Liu, Hui
N1 - Publisher Copyright:
© 1976-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Source code identifiers often contain abbreviations. Such abbreviations may reduce the readability of the source code, which in turn hinders the maintenance of software applications. To this end, accurate and automated approaches to expanding abbreviations in source code are desirable, and abbreviation expansion has been intensively investigated. However, to the best of our knowledge, most existing approaches are heuristic, and none of them has employed deep learning techniques, let alone the most advanced large language models (LLMs). LLMs have demonstrated cutting-edge performance in various software engineering tasks, and thus they have the potential to expand abbreviations automatically. To this end, in this paper, we present the first empirical study on prompt-based usage of LLMs for abbreviation expansion. Our evaluation results on a public benchmark suggest that GPT is substantially less accurate than the state-of-the-art approach, with precision and recall reduced by 28.2% and 27.8%, respectively. We manually analyzed the failed cases and discovered the root causes of the failures: 1) lack of context and 2) inability to recognize abbreviations. In response to the first cause, we investigated the effect of various contexts and found that surrounding source code is the best choice. In response to the second cause, we designed an iterative approach that identifies and explicitly marks missed abbreviations in prompts. Finally, we propose post-condition checking to exclude incorrect expansions that violate common sense. All such measures together make LLM-based abbreviation expansion comparable to the state of the art while avoiding the expensive source code parsing and deep analysis that are indispensable for state-of-the-art approaches. Our evaluation results on open-source LLMs, i.e., DeepSeek-Coder and Llama, confirm that the post-condition checking works well with various LLMs.
AB - Source code identifiers often contain abbreviations. Such abbreviations may reduce the readability of the source code, which in turn hinders the maintenance of software applications. To this end, accurate and automated approaches to expanding abbreviations in source code are desirable, and abbreviation expansion has been intensively investigated. However, to the best of our knowledge, most existing approaches are heuristic, and none of them has employed deep learning techniques, let alone the most advanced large language models (LLMs). LLMs have demonstrated cutting-edge performance in various software engineering tasks, and thus they have the potential to expand abbreviations automatically. To this end, in this paper, we present the first empirical study on prompt-based usage of LLMs for abbreviation expansion. Our evaluation results on a public benchmark suggest that GPT is substantially less accurate than the state-of-the-art approach, with precision and recall reduced by 28.2% and 27.8%, respectively. We manually analyzed the failed cases and discovered the root causes of the failures: 1) lack of context and 2) inability to recognize abbreviations. In response to the first cause, we investigated the effect of various contexts and found that surrounding source code is the best choice. In response to the second cause, we designed an iterative approach that identifies and explicitly marks missed abbreviations in prompts. Finally, we propose post-condition checking to exclude incorrect expansions that violate common sense. All such measures together make LLM-based abbreviation expansion comparable to the state of the art while avoiding the expensive source code parsing and deep analysis that are indispensable for state-of-the-art approaches. Our evaluation results on open-source LLMs, i.e., DeepSeek-Coder and Llama, confirm that the post-condition checking works well with various LLMs.
KW - Abbreviation
KW - LLM
KW - expansion
UR - https://www.scopus.com/pages/publications/105019777681
U2 - 10.1109/TSE.2025.3623625
DO - 10.1109/TSE.2025.3623625
M3 - Article
AN - SCOPUS:105019777681
SN - 0098-5589
VL - 51
SP - 3591
EP - 3607
JO - IEEE Transactions on Software Engineering
JF - IEEE Transactions on Software Engineering
IS - 12
ER -