Towards Diverse and Efficient Audio Captioning via Diffusion Models

  • Manjie Xu
  • Chenxing Li*
  • Yong Ren
  • Xinyi Tu
  • Ruibo Fu
  • Wei Liang*
  • Dong Yu*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

We introduce Diffusion-based Audio Captioning (DAC), a non-autoregressive diffusion model tailored for diverse and efficient audio captioning. Although existing captioning models built on language backbones have achieved remarkable success across captioning tasks, their limited generation speed and diversity impede progress in audio understanding and multimedia applications. Our diffusion-based framework offers unique advantages stemming from its inherent stochasticity and holistic context modeling. Through rigorous evaluation, we demonstrate that DAC not only matches or surpasses existing baselines in caption quality, but also significantly outperforms them in generation speed and diversity.

Original language: English
Pages (from-to): 191-195
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publication status: Published - 2025
Event: 26th Interspeech Conference 2025, Rotterdam, Netherlands
Duration: 17 Aug 2025 - 21 Aug 2025

Keywords

  • audio captioning
  • diffusion model
