AudioGest: Gesture-based Interaction for Virtual Reality using Audio Devices

Tong Liu, Yi Xiao, Mingwei Hu, Hao Sha, Shining Ma, Boyu Gao, Shihui Guo, Yue Liu, Weitao Song

Research output: Contribution to journal › Article › Peer-reviewed

Abstract

Current virtual reality (VR) systems treat gesture interaction based on cameras, controllers, and touch screens as one of the mainstream interaction methods, since these devices can provide accurate gesture input. However, constrained by their application forms and physical size, these methods cannot extend the interaction area to everyday surfaces such as walls and tables. To address this challenge, we propose AudioGest, a portable, plug-and-play system that uses a set of microphones to detect the audio signals generated by finger tapping and sliding on a surface, without extensive calibration. First, an audio synthesis-recognition pipeline based on micro-contact dynamics simulation is constructed to generate modal audio for surfaces with different materials and physical properties. Then, the accuracy and effectiveness of the synthetic audio are verified by proportionally mixing it with real audio to form the training sets. Finally, a series of desktop office applications are developed to demonstrate the scalability and versatility of AudioGest in VR scenarios.
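The modal synthesis and data-mixing steps summarized above can be illustrated with a minimal sketch: a tap is modeled as a sum of exponentially damped sinusoids whose frequencies and dampings stand in for material-dependent modal parameters, and the resulting synthetic clips are mixed proportionally with real recordings to form a training set. The function names, parameters, and modal values below (e.g. `synthesize_tap`, `mix_training_set`, the example frequencies) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def synthesize_tap(modal_freqs, dampings, amps, sr=16000, duration=0.2):
    """Toy modal synthesis of a finger tap: a sum of damped sinusoids.

    modal_freqs (Hz), dampings (1/s), and amps are assumed to be
    material-dependent modal parameters; the paper derives them from
    micro-contact dynamics simulation.
    """
    t = np.arange(int(sr * duration)) / sr
    clip = np.zeros_like(t)
    for f, d, a in zip(modal_freqs, dampings, amps):
        clip += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return clip / (np.max(np.abs(clip)) + 1e-9)  # normalize to [-1, 1]


def mix_training_set(synthetic_clips, real_clips, synth_ratio=0.5, rng=None):
    """Combine real recordings with a given proportion of synthetic clips."""
    rng = rng or np.random.default_rng(0)
    n_synth = int(len(real_clips) * synth_ratio / (1.0 - synth_ratio))
    idx = rng.choice(len(synthetic_clips),
                     size=min(n_synth, len(synthetic_clips)),
                     replace=False)
    return real_clips + [synthetic_clips[i] for i in idx]


# Example: a tap on a hypothetical wooden surface with three assumed modes.
tap = synthesize_tap(modal_freqs=[450, 1200, 2800],
                     dampings=[40, 80, 150],
                     amps=[1.0, 0.5, 0.25])
```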

Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: IEEE Transactions on Visualization and Computer Graphics
DOI
Publication status: Accepted/In press - 2024
