Evaluating and Improving GPT-Based Expansion of Abbreviations

  • Yanjie Jiang
  • Chenxu Li
  • Zixiao Zhao
  • Fu Fan
  • Lu Zhang
  • Hui Liu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Source code identifiers often contain abbreviations. Such abbreviations may reduce the readability of the source code, which in turn hinders software maintenance. Accurate and automated approaches to expanding abbreviations in source code are therefore desirable, and abbreviation expansion has been intensively investigated. However, to the best of our knowledge, most existing approaches are heuristic-based, and none of them has employed deep learning techniques, let alone the most advanced large language models (LLMs). LLMs have demonstrated cutting-edge performance on various software engineering tasks and thus have the potential to expand abbreviations automatically. In this paper, we present the first empirical study on prompt-based usage of LLMs for abbreviation expansion. Our evaluation results on a public benchmark suggest that GPT is substantially less accurate than the state-of-the-art approach, reducing precision and recall by 28.2% and 27.8%, respectively. We manually analyzed the failed cases and discovered two root causes of the failures: 1) lack of context and 2) inability to recognize abbreviations. In response to the first cause, we investigated the effect of various contexts and found that surrounding source code is the best choice. In response to the second cause, we designed an iterative approach that identifies and explicitly marks missed abbreviations in prompts. Finally, we propose a post-condition checking step to exclude incorrect expansions that violate common sense. All these measures together make LLM-based abbreviation expansion comparable to the state of the art while avoiding the expensive source code parsing and deep analysis that state-of-the-art approaches require. Our evaluation results on open-source LLMs, i.e., DeepSeek-Coder and Llama, confirm that the post-condition checking works well across various LLMs.
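The post-condition checking described in the abstract can be illustrated with a small sketch. The concrete rules below (matching first letters and requiring the abbreviation's characters to appear in order within the candidate expansion) are illustrative assumptions, not the paper's exact conditions:

```python
def is_plausible_expansion(abbr: str, expansion: str) -> bool:
    """Heuristic post-condition check on an LLM-proposed expansion.

    Illustrative rules (assumed, not the paper's exact ones):
    1) the expansion must start with the abbreviation's first letter;
    2) every character of the abbreviation must appear, in order,
       somewhere in the expansion (subsequence containment).
    """
    abbr_l, exp_l = abbr.lower(), expansion.lower()
    if not abbr_l or not exp_l or abbr_l[0] != exp_l[0]:
        return False
    # Consume exp_l left to right; each abbreviation character must be
    # found after the previous one, i.e. abbr_l is a subsequence of exp_l.
    remaining = iter(exp_l)
    return all(ch in remaining for ch in abbr_l)

# A hypothetical filtering step over candidate expansions:
candidates = {"msg": ["message", "manager"], "cnt": ["count", "total"]}
kept = {a: [e for e in exps if is_plausible_expansion(a, e)]
        for a, exps in candidates.items()}
```

Here `kept` retains only `message` for `msg` and only `count` for `cnt`; expansions that cannot plausibly be abbreviated to the original token are discarded before being reported.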

Original language: English
Pages (from-to): 3591-3607
Number of pages: 17
Journal: IEEE Transactions on Software Engineering
Volume: 51
Issue number: 12
DOIs
Publication status: Published - 2025

Keywords

  • Abbreviation
  • LLM
  • expansion
