Evaluating multi-channel interaction design for enhancing pose accuracy in yoga training among visually impaired individuals

Xiaohan Zhu, Xuandong Zhao, Jianming Yang, Bowen Sun*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Aim: Physical exercise is essential to the physical and mental health of visually impaired people, yet they often face challenges during exercise such as inaccurate movements, poor rhythm, and difficulty mastering postures. This project introduces an assistive device based on a multi-channel interaction design strategy to improve the accuracy of yoga practice for visually impaired users and enable them to exercise independently.

Methods: The system uses a 1:1 model combined with an output interaction model. Its effectiveness was verified through controlled experiments in which unassisted practice served as the control condition and device-assisted practice as the experimental condition. Improvements in yoga accuracy and product usability were assessed with the Assisted Accuracy Scale and the System Usability Scale (SUS), respectively.

Results: The multi-channel interaction design significantly improved the accuracy of yoga movements and the usability of the device, and it enhanced the ability of visually impaired participants to exercise independently.

Conclusion: We hope to replicate this design strategy so that more visually impaired individuals can perform effective physical exercise independently at home, in a gym, or outdoors, thereby improving their quality of life and overall health.
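
As context for the usability measure named above: the System Usability Scale is a standard 10-item questionnaire with a well-established scoring rule (each odd-numbered item contributes its raw score minus 1, each even-numbered item contributes 5 minus its raw score, and the sum is multiplied by 2.5 to yield a 0-100 score). The sketch below is an illustrative Python implementation of that standard rule only; it is not the authors' analysis code, and the function name and example responses are hypothetical.

```python
def sus_score(responses):
    """Score one completed System Usability Scale (SUS) questionnaire.

    `responses` is a list of 10 integers in 1..5 (strongly disagree to
    strongly agree), ordered as SUS items 1-10. Returns a 0-100 score.
    """
    if len(responses) != 10 or any(r not in range(1, 6) for r in responses):
        raise ValueError("SUS requires ten responses, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded: contribution = r - 1.
        # Even items are negatively worded: contribution = 5 - r.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the raw 0-40 sum to 0-100


# Hypothetical example: one participant's responses after assisted practice.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```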

Original language: English
Journal: Disability and Rehabilitation: Assistive Technology
Publication status: Accepted/In press, 2024

Keywords

  • action accuracy
  • design and research
  • multisensory channels
  • visually impaired groups
  • yoga assistance
