Abstract
Sign language, used mainly by the deaf community, is expressed through hand movements and facial expressions. Although several gesture recognition methods have been proposed, each has limitations that make it ill-suited to the sign language recognition (SLR) problem. In this article, we propose an end-to-end American SLR system that uses the built-in speakers and microphones of smartphones and supports recognition at both the word level and the sentence level. The high-level idea is to use inaudible acoustic signals to estimate channel information and capture sign language in real time. We use the channel impulse response to represent each sign language gesture, which enables finger-level recognition. We also account for transition movements between consecutive words, treating them as an additional label when training the sentence-level classification model. We implement a prototype system and run a series of experiments that demonstrate its promising performance. Experimental results show that our approach achieves an accuracy of 97.2% for word-level recognition and a word error rate of 0.9% for sentence-level recognition.
| Original language | English |
|---|---|
| Pages (from-to) | 8839-8852 |
| Number of pages | 14 |
| Journal | IEEE Internet of Things Journal |
| Volume | 10 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 15 May 2023 |
Keywords
- Acoustic sensing
- American Sign Language (ASL)
- Mobile computing