HearASL: Your Smartphone Can Hear American Sign Language

Yusen Wang, Fan Li*, Yadong Xie, Chunhui Duan, Yu Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Sign language, which is mainly used by the deaf community, is expressed through hand movements and facial expressions. Although several gesture recognition methods have been proposed, each has limitations that make it ill-suited to the sign language recognition (SLR) problem. In this article, we propose an end-to-end American SLR system that uses the built-in speaker and microphone of a smartphone, enabling SLR at both the word level and the sentence level. The high-level idea is to use inaudible acoustic signals to estimate channel information and capture sign language gestures in real time. We use the channel impulse response (CIR) to represent each sign language gesture, which enables finger-level recognition. We also account for the transition movements between two consecutive words and treat them as an additional label when training the sentence-level classification model. We implement a prototype system and run a series of experiments that demonstrate its promising performance. Experimental results show that our approach achieves an accuracy of 97.2% for word-level recognition and a word error rate of 0.9% for sentence-level recognition.
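This record includes only the abstract, so the following Python sketch is an illustrative guess at the core sensing step it describes: a known wideband probe confined to an inaudible band is played through the smartphone speaker, and each received frame is circularly cross-correlated with the probe to recover one CIR snapshot; a gesture then appears as a time series of such snapshots. All parameters (sampling rate, frame length, band edges, tap count) and the probe design are assumptions for illustration, not the paper's actual settings.

    import numpy as np

    FS = 48_000    # assumed smartphone sampling rate (Hz)
    FRAME = 480    # assumed probe-frame length: 10 ms per CIR snapshot
    CIR_TAPS = 64  # assumed number of channel taps to keep

    def make_probe(frame_len=FRAME, fs=FS, lo=18_000, hi=22_000, seed=0):
        """Build a known probe confined to an assumed inaudible band.

        Hypothetical design: band-limit white noise to lo..hi Hz in the
        frequency domain, then normalize to unit peak amplitude.
        """
        rng = np.random.default_rng(seed)
        spec = np.fft.rfft(rng.standard_normal(frame_len))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
        spec[(freqs < lo) | (freqs > hi)] = 0.0
        probe = np.fft.irfft(spec, n=frame_len)
        return probe / np.max(np.abs(probe))

    def estimate_cir(received, probe, taps=CIR_TAPS):
        """Estimate one CIR snapshot per frame by circular cross-correlation.

        Correlating each received frame with the known probe approximates
        the channel impulse response; hand and finger motion shows up as
        changes in the leading taps across consecutive frames.
        """
        n = len(probe)
        n_frames = len(received) // n
        probe_conj = np.conj(np.fft.rfft(probe))
        cirs = np.empty((n_frames, taps))
        for i in range(n_frames):
            frame = received[i * n:(i + 1) * n]
            corr = np.fft.irfft(np.fft.rfft(frame) * probe_conj, n=n)
            cirs[i] = corr[:taps]  # keep near-range taps (paths close to the phone)
        return cirs  # shape (time, taps): the per-gesture "CIR image"

Under these assumptions, the resulting (time, taps) matrix would be the input to the word-level or sentence-level classifier the abstract mentions; the transition movements between words would produce their own CIR patterns and can be assigned their own label during sentence-level training.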

Original language: English
Pages (from-to): 8839-8852
Number of pages: 14
Journal: IEEE Internet of Things Journal
Volume: 10
Issue number: 10
DOIs
Publication status: Published - 15 May 2023

Keywords

  • Acoustic sensing
  • American Sign Language (ASL)
  • Mobile computing
