Semantic Information Extraction and Multi-Agent Communication Optimization Based on Generative Pre-Trained Transformer

Li Zhou*, Xinfeng Deng, Zhe Wang, Xiaoying Zhang*, Yanjie Dong, Xiping Hu, Zhaolong Ning, Jibo Wei

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Collaboration among multiple agents demands efficient communication. However, observational data in multi-agent systems are typically voluminous and redundant, posing substantial challenges to the communication system when transmitted directly. To address this issue, this paper introduces a multi-agent communication scheme based on a large language model (LLM), referred to as GPT-based semantic information extraction for multi-agent communication (GMAC). The scheme uses an LLM to extract semantic information from raw observations and leverages its generative capabilities to predict subsequent actions, enabling agents to make more informed decisions. By transmitting only the extracted key semantic data, GMAC substantially reduces the signaling exchanged among agents, cutting communication overhead by approximately 53% compared with the baseline methods. Experimental results further show that GMAC improves both convergence speed and decision-making accuracy. Consequently, GMAC offers a straightforward and effective way to achieve efficient and economical communication in multi-agent systems.
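The core idea of transmitting extracted semantics instead of raw observations can be illustrated with a minimal sketch. The function `extract_semantics` below is a hypothetical stand-in for the paper's LLM-based extractor (the actual GMAC pipeline is not reproduced here), and the observation fields and savings shown are illustrative assumptions, not the paper's experimental figures:

```python
import json

def extract_semantics(observation: dict, key_fields: tuple) -> dict:
    """Hypothetical stand-in for LLM-based semantic extraction:
    keep only the fields assumed relevant to the receiver's decision."""
    return {k: observation[k] for k in key_fields if k in observation}

def message_bytes(payload: dict) -> int:
    """Size of the payload as a serialized JSON message."""
    return len(json.dumps(payload).encode("utf-8"))

# An illustrative verbose observation, as might arise in a multi-agent system.
raw_obs = {
    "agent_id": 3,
    "position": [12.4, 7.1],
    "goal": [20.0, 5.0],
    "lidar_scan": [0.0] * 180,   # bulky sensor data, largely redundant
    "battery": 0.87,
    "timestamp": 1712345678.0,
}

# Transmit only the distilled semantic message.
semantic_msg = extract_semantics(raw_obs, key_fields=("agent_id", "position", "goal"))

raw_size = message_bytes(raw_obs)
sem_size = message_bytes(semantic_msg)
reduction = 1.0 - sem_size / raw_size
print(f"raw: {raw_size} B, semantic: {sem_size} B, saved {reduction:.0%}")
```

In GMAC itself the extractor is a generative pre-trained transformer rather than a field filter, but the communication-side effect is the same: a much smaller message carrying only decision-relevant content.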

Original language: English
Pages (from-to): 725-737
Number of pages: 13
Journal: IEEE Transactions on Cognitive Communications and Networking
Volume: 11
Issue number: 2
DOIs
Publication status: Published - 2025

Keywords

  • Generative AI
  • multi-agent
  • reinforcement learning
  • semantic communication
