Audio-Visual LLM for Augmenting Accessibility of 360° Video

  • Yujia Wang*
  • Qingyun Deng
  • Wei Liang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Creators of 360° videos make rich use of non-speech sounds to provide immersive experiences. The sound accessibility of such videos is therefore essential for viewers, especially d/Deaf and hard-of-hearing (DHH) people. In this paper, we propose AVLLM-360, a multimodal framework that uses Large Language Models (LLMs) to understand panoramic video content and generate sound descriptions that go beyond simple recognition of sound types. AVLLM-360 integrates both visual and auditory information and bootstraps cross-modal training from a pre-trained LLM. We also implemented a mixed-media interface that lets users visualize the generated results hierarchically, enabling personalized customization of sound description generation while watching 360° videos. We conducted extensive experiments to evaluate AVLLM-360's ability across a range of video understanding tasks, as well as qualitative studies with 12 DHH participants evaluating the effectiveness of AVLLM-360 on 24 360° videos covering different genres.

Original language: English
Pages (from-to): 1433-1445
Number of pages: 13
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 36
Issue number: 2
DOIs
Publication status: Published - 2026
Externally published: Yes

Keywords

  • 360° video
  • audio-visual
  • accessibility
  • large language model
  • sound description generation
