AudioGest: Gesture-based Interaction for Virtual Reality using Audio Devices

Tong Liu, Yi Xiao, Mingwei Hu, Hao Sha, Shining Ma, Boyu Gao, Shihui Guo, Yue Liu, Weitao Song

Research output: Contribution to journal › Article › peer-review

Abstract

Current virtual reality (VR) systems rely on cameras, handheld controllers, and touch screens as mainstream gesture-interaction methods, which provide accurate gesture input. However, constrained by their form factors and device volume, these methods cannot extend the interaction area to everyday surfaces such as walls and tables. To address this challenge, we propose AudioGest, a portable, plug-and-play system that uses a set of microphones to detect the audio signals generated by fingers tapping and sliding on a surface, without extensive calibration. First, an audio synthesis-recognition pipeline based on micro-contact dynamics simulation is constructed to generate modal audio for surfaces with different materials and physical properties. Then, the accuracy and effectiveness of the synthetic audio are verified by mixing it with real audio in varying proportions to form the training sets. Finally, a series of desktop office applications are developed to demonstrate the scalability and versatility of AudioGest in VR scenarios.
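The modal audio synthesis mentioned in the abstract is, in general, the technique of modeling an impact sound as a sum of exponentially decaying sinusoids whose frequencies, dampings, and amplitudes depend on the struck material. The sketch below illustrates that general idea only; the function name and the mode parameters are hypothetical and are not taken from the paper.

```python
import math

def modal_tap(modes, duration=0.1, sample_rate=44100):
    """Synthesize a tap sound as a sum of exponentially decaying sinusoids.

    modes: list of (frequency_hz, damping_per_s, amplitude) tuples.
    Returns a list of audio samples of length duration * sample_rate.
    """
    n = int(duration * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # Each mode contributes a damped sinusoid; the sum is the tap sound.
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, a in modes)
        samples.append(s)
    return samples

# Illustrative modes for a "wooden tabletop" tap (made-up values).
wood_modes = [(220.0, 60.0, 1.0), (660.0, 120.0, 0.5), (1100.0, 200.0, 0.25)]
signal = modal_tap(wood_modes)
```

Changing the mode table (e.g. higher frequencies and heavier damping) would emulate a stiffer or rougher surface, which is the lever a synthesis pipeline like the one described can vary to cover different materials.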

Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: IEEE Transactions on Visualization and Computer Graphics
Publication status: Accepted/In press - 2024

Keywords

  • Data acquisition
  • Dynamics
  • Human computer interaction
  • Microphones
  • Pipelines
  • Rough surfaces
  • Surface roughness
  • Training
  • audio synthesis
  • gesture interaction
  • virtual reality
