TY - JOUR
T1 - AudioGest
T2 - Gesture-Based Interaction for Virtual Reality Using Audio Devices
AU - Liu, Tong
AU - Xiao, Yi
AU - Hu, Mingwei
AU - Sha, Hao
AU - Ma, Shining
AU - Gao, Boyu
AU - Guo, Shihui
AU - Liu, Yue
AU - Song, Weitao
N1 - Publisher Copyright:
© 1995-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Current virtual reality (VR) systems rely on camera-, controller-, and touch-screen-based gesture interaction as one of their mainstream input methods, which can provide accurate gesture input. However, limited by their form factors and device volume, these methods cannot extend the interaction area to everyday surfaces such as walls and tables. To address this challenge, we propose AudioGest, a portable, plug-and-play system that detects the audio signals generated by finger tapping and sliding on a surface through a set of microphones, without extensive calibration. First, an audio synthesis-recognition pipeline based on micro-contact dynamics simulation is constructed to synthesize modal audio for different materials and physical properties. Then, the accuracy and effectiveness of the synthetic audio are verified by proportionally mixing synthetic and real audio to form the training sets. Finally, a series of desktop office applications are developed to demonstrate the scalability and versatility of AudioGest in VR scenarios.
AB - Current virtual reality (VR) systems rely on camera-, controller-, and touch-screen-based gesture interaction as one of their mainstream input methods, which can provide accurate gesture input. However, limited by their form factors and device volume, these methods cannot extend the interaction area to everyday surfaces such as walls and tables. To address this challenge, we propose AudioGest, a portable, plug-and-play system that detects the audio signals generated by finger tapping and sliding on a surface through a set of microphones, without extensive calibration. First, an audio synthesis-recognition pipeline based on micro-contact dynamics simulation is constructed to synthesize modal audio for different materials and physical properties. Then, the accuracy and effectiveness of the synthetic audio are verified by proportionally mixing synthetic and real audio to form the training sets. Finally, a series of desktop office applications are developed to demonstrate the scalability and versatility of AudioGest in VR scenarios.
KW - Audio synthesis
KW - gesture interaction
KW - human-computer interaction
KW - virtual reality
UR - http://www.scopus.com/inward/record.url?scp=85192730598&partnerID=8YFLogxK
U2 - 10.1109/TVCG.2024.3397868
DO - 10.1109/TVCG.2024.3397868
M3 - Article
AN - SCOPUS:85192730598
SN - 1077-2626
VL - 31
SP - 1569
EP - 1581
JO - IEEE Transactions on Visualization and Computer Graphics
JF - IEEE Transactions on Visualization and Computer Graphics
IS - 2
ER -