Latency-Aware Generative Semantic Communications With Pre-Trained Diffusion Models

Li Qiao, Mahdi Boloursaz Mashhadi, Zhen Gao*, Chuan Heng Foh, Pei Xiao, Mehdi Bennis

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Generative foundation AI models have recently shown great success in synthesizing natural signals with high perceptual quality, using only textual prompts and conditioning signals to guide the generation process. This enables semantic communications at extremely low data rates in future wireless networks. In this letter, we develop a latency-aware semantic communications framework with pre-trained generative models. The transmitter performs multi-modal semantic decomposition on the input signal and transmits each semantic stream with the appropriate coding and communication scheme based on the intent. For the prompt, we adopt a retransmission-based scheme to ensure reliable transmission, and for the other semantic modalities we use an adaptive modulation/coding scheme to achieve robustness to the changing wireless channel. Furthermore, we design a semantic- and latency-aware scheme to allocate transmission power to the different semantic modalities based on their importance, subject to semantic quality constraints. At the receiver, a pre-trained generative model synthesizes a high-fidelity signal from the received multi-stream semantics. Simulation results demonstrate ultra-low-rate, low-latency, and channel-adaptive semantic communications.
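To make the power-allocation idea concrete, the following is a minimal illustrative sketch (not the paper's actual optimization): a total transmit-power budget is split across semantic streams in proportion to hypothetical importance weights, with a per-stream floor standing in for the semantic quality constraint. All names and values here are assumptions for illustration only.

```python
def allocate_power(importance, total_power, p_min=0.1):
    """Split total_power across semantic streams by normalized importance.

    Streams whose proportional share falls below the quality floor p_min are
    pinned at p_min, and the remaining budget is redistributed among the rest
    by importance. Purely illustrative; the letter's scheme is an optimization
    over semantic quality and latency constraints.
    """
    n = len(importance)
    assert total_power >= n * p_min, "budget cannot meet the per-stream floor"
    floored = set()
    while True:
        free = [i for i in range(n) if i not in floored]
        budget = total_power - p_min * len(floored)
        s = sum(importance[i] for i in free)
        alloc = [p_min] * n
        for i in free:
            alloc[i] = budget * importance[i] / s
        deficient = {i for i in free if alloc[i] < p_min}
        if not deficient:
            return alloc
        floored |= deficient

# Example: the prompt stream (most important modality) gets the largest share.
print(allocate_power([0.6, 0.3, 0.1], total_power=1.0))
```

Note the redistribution loop: pinning a weak stream at the floor shrinks the free budget, which can push another stream below the floor, so the loop repeats until every free stream clears `p_min`.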

Original language: English
Pages (from-to): 2652-2656
Number of pages: 5
Journal: IEEE Wireless Communications Letters
Volume: 13
Issue number: 10
DOIs
Publication status: Published - 2024

Keywords

  • Generative AI
  • pre-trained foundation models
  • semantic communication
  • stable diffusion
