HearASL: Your Smartphone Can Hear American Sign Language

Yusen Wang, Fan Li*, Yadong Xie, Chunhui Duan, Yu Wang

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Sign language, used mainly by the deaf community, is expressed through hand movements and facial expressions. Although a number of gesture recognition methods have been proposed, they suffer from various limitations and are not well suited to the sign language recognition (SLR) problem. In this article, we propose an end-to-end American SLR system that uses the built-in speakers and microphones of smartphones and enables SLR at both the word level and the sentence level. The high-level idea is to use inaudible acoustic signals to estimate channel information and capture sign language gestures in real time. We use the channel impulse response (CIR) to represent each sign language gesture, which enables finger-level recognition. We also pay attention to the transition movements between consecutive words and treat them as an additional label when training the sentence-level classification model. We implement a prototype system and run a series of experiments that demonstrate its promising performance. Experimental results show that our approach achieves an accuracy of 97.2% for word-level recognition and a word error rate of 0.9% for sentence-level recognition.
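The abstract centers on estimating the channel impulse response (CIR) from an inaudible acoustic signal emitted by the smartphone speaker and recorded by its microphone. The sketch below is a minimal illustration of that idea, not the paper's actual pipeline: it assumes a known pseudo-random training sequence modulated to a near-ultrasonic band, and it estimates a per-frame CIR by cross-correlating the received samples with the transmitted frame. All parameter values (48 kHz sampling rate, ~19 kHz carrier, 64 taps) are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def estimate_cir(tx_frame: np.ndarray, rx_frame: np.ndarray, taps: int = 64) -> np.ndarray:
    """Estimate one CIR snapshot by cross-correlating the received microphone
    samples with the known transmitted (inaudible) training frame."""
    # Full cross-correlation between received and transmitted signals.
    corr = np.correlate(rx_frame, tx_frame, mode="full")
    # The strongest peak marks the direct path; the taps that follow capture
    # reflections, e.g., off the signer's hands and fingers.
    peak = np.argmax(np.abs(corr))
    cir = corr[peak:peak + taps]
    # Normalize so snapshots are comparable across frames.
    return cir / (np.linalg.norm(cir) + 1e-12)

# Illustrative usage (all values are assumptions for this sketch).
fs = 48_000                                      # common smartphone sampling rate
rng = np.random.default_rng(0)
tx = rng.choice([-1.0, 1.0], size=480)           # pseudo-random training symbols
t = np.arange(tx.size) / fs
tx_ultra = tx * np.cos(2 * np.pi * 19_000 * t)   # shift to ~19 kHz (inaudible band)

# Simulated received frame: delayed, attenuated copy of the probe plus noise.
rx = np.concatenate([np.zeros(100), 0.8 * tx_ultra]) + 0.01 * rng.standard_normal(580)
cir = estimate_cir(tx_ultra, rx)
print(cir.shape)                                 # (64,) -- one CIR snapshot per frame
```

A sequence of such per-frame CIR snapshots is the kind of finger-level channel feature the abstract describes feeding into the word- and sentence-level classifiers.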

Original language: English
Pages (from-to): 8839-8852
Number of pages: 14
Journal: IEEE Internet of Things Journal
Volume: 10
Issue number: 10
DOI
Publication status: Published - 15 May 2023
