The effects of crossmodal semantic reliability for audiovisual immersion experience of virtual reality

Hongtao Yu, Qiong Wu, Mengni Zhou, Qi Li, Jiajia Yang, Satoshi Takahashi, Yoshimichi Ejima, Jinglong Wu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Previous studies have reported that the sense of immersion can be improved by pairing visual content with auditory stimuli; however, whether the semantic relationship between auditory and visual stimuli can also modulate the visual virtual experience remains unclear. Using a psychophysical method, this study investigated categorization performance under three crossmodal semantic reliability conditions: semantically reliable, semantically unreliable and semantically uncertain. The results revealed faster categorization under the crossmodal semantically reliable condition regardless of stimulus category, indicating that crossmodal semantic reliability led to sufficient multisensory integration. In particular, under the crossmodal semantically unreliable condition, categorization was faster for non-living stimuli, indicating a robust representation of non-living objects. These results indicate that adopting semantically reliable visual and auditory stimuli as multisensory inputs can efficiently improve the multisensory immersion experience.
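To make the design concrete, the sketch below illustrates how mean categorization reaction times could be compared across the three reliability conditions and the two stimulus categories. This is not from the paper: the condition labels, trial counts, and all data values are synthetic assumptions chosen only to mirror the pattern the abstract reports.

```python
import numpy as np

# Hypothetical sketch of the 3 (semantic reliability) x 2 (category) design
# described in the abstract. All values below are synthetic placeholders;
# the actual study measured participants' categorization reaction times.
rng = np.random.default_rng(0)

conditions = ["reliable", "unreliable", "uncertain"]  # crossmodal semantic reliability
categories = ["living", "non-living"]                 # stimulus category

# Assumed mean reaction times (seconds) per design cell: reliable pairings
# fastest regardless of category, and non-living stimuli faster than living
# ones under the unreliable condition, as the abstract reports.
assumed_means = {
    ("reliable", "living"): 0.55, ("reliable", "non-living"): 0.55,
    ("unreliable", "living"): 0.70, ("unreliable", "non-living"): 0.63,
    ("uncertain", "living"): 0.65, ("uncertain", "non-living"): 0.65,
}

n_trials = 100  # illustrative trial count per cell
for cond in conditions:
    for cat in categories:
        # Draw synthetic single-trial reaction times around the assumed mean.
        rts = rng.normal(loc=assumed_means[(cond, cat)], scale=0.08, size=n_trials)
        print(f"{cond:>10} / {cat:<10} mean RT: {rts.mean():.3f} s")
```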

Original language: English
Pages (from-to): 161-171
Number of pages: 11
Journal: International Journal of Mechatronics and Automation
Volume: 9
Issue number: 4
DOIs
Publication status: Published - 2022
Externally published: Yes

Keywords

  • audiovisual integration
  • multisensory presence
  • selective attention
  • semantic category
  • semantic reliability
  • virtual reality
