A Sensing Data and Deep Learning-Based Sign Language Recognition Approach

30 Pages · Posted: 16 Nov 2023


Wei Hao

China Agricultural University

Chen Hou

China Agricultural University

Zhihao Zhang

affiliation not provided to SSRN

Xueyu Zhai

affiliation not provided to SSRN

Li Wang

affiliation not provided to SSRN

Guanghao Lv

affiliation not provided to SSRN

Abstract

Accurate sign language recognition helps deaf people communicate with hearing people. Recent research has focused on sign language recognition based on sensing data, but most of it relies on traditional machine learning approaches, which cannot effectively extract the spatio-temporal features of sign language gestures. Although deep learning has been applied to recognizing dynamic sign language gestures, its accuracy still leaves room for improvement. In this paper, we propose an approach that combines multiple networks for accurate real-time sign language recognition. First, we design a one-dimensional convolutional neural network (CNN) with skip connections, which improve the network's feature extraction ability by integrating the feature maps of different layers. Second, we propose an improved multi-head attention mechanism that incorporates bidirectional long short-term memory (BiLSTM) networks into the attention heads to capture the internal temporal structure of sign language and further extract the crucial motion features. Third, we place this improved multi-head attention mechanism after the final convolutional layer of the proposed CNN and call the resulting architecture BMCNN. Finally, we evaluate the approach with ten-fold cross-validation on the sensing dataset, where it achieves a sign language recognition accuracy of 99.44%. The results show that the proposed approach outperforms traditional machine learning approaches and other state-of-the-art techniques in this field.
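
To make the described pipeline concrete, the following is a minimal PyTorch sketch of the BMCNN idea: a skip-connected 1D CNN followed by attention heads that each contain a BiLSTM, with a classifier on top. All layer counts, channel sizes, the number of heads, and the pooling step are illustrative assumptions and do not reproduce the authors' exact configuration.

    # Hypothetical sketch of the BMCNN idea described in the abstract (PyTorch).
    # Layer counts, channel sizes, and head counts are illustrative assumptions.
    import torch
    import torch.nn as nn

    class SkipConv1dBlock(nn.Module):
        """1D convolution block whose output is concatenated with its input
        (a simple form of skip connection that fuses features of different layers)."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm1d(out_ch),
                nn.ReLU(),
            )

        def forward(self, x):                              # x: (batch, in_ch, time)
            return torch.cat([x, self.conv(x)], dim=1)     # (batch, in_ch + out_ch, time)

    class BiLSTMAttentionHead(nn.Module):
        """One attention 'head' that first passes the sequence through a BiLSTM,
        then applies self-attention to the BiLSTM outputs."""
        def __init__(self, dim, hidden):
            super().__init__()
            self.bilstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
            self.attn = nn.MultiheadAttention(2 * hidden, num_heads=1, batch_first=True)

        def forward(self, x):                              # x: (batch, time, dim)
            h, _ = self.bilstm(x)
            out, _ = self.attn(h, h, h)
            return out                                     # (batch, time, 2 * hidden)

    class BMCNN(nn.Module):
        def __init__(self, in_ch=8, num_classes=20, heads=4, hidden=64):
            super().__init__()
            self.block1 = SkipConv1dBlock(in_ch, 32)
            self.block2 = SkipConv1dBlock(in_ch + 32, 64)
            feat_dim = in_ch + 32 + 64
            self.heads = nn.ModuleList(
                [BiLSTMAttentionHead(feat_dim, hidden) for _ in range(heads)]
            )
            self.classifier = nn.Linear(heads * 2 * hidden, num_classes)

        def forward(self, x):                              # x: (batch, channels, time)
            f = self.block2(self.block1(x))                # skip-connected CNN features
            f = f.transpose(1, 2)                          # -> (batch, time, feat_dim)
            h = torch.cat([head(f) for head in self.heads], dim=-1)
            return self.classifier(h.mean(dim=1))          # pool over time, then classify

    if __name__ == "__main__":
        model = BMCNN()
        logits = model(torch.randn(2, 8, 100))             # 2 samples, 8 sensor channels, 100 time steps
        print(logits.shape)                                # torch.Size([2, 20])

In this sketch each attention head holds its own BiLSTM so that temporal context is modeled before attention weights are computed; whether the paper shares the BiLSTM across heads or places it differently is not stated in this listing.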

Keywords: Sign language recognition, Sensing data, Skip connection CNN, Multi-head attention mechanism, BiLSTM
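
As a rough illustration of the ten-fold cross-validation protocol mentioned in the abstract, the skeleton below splits a synthetic placeholder dataset into ten folds and averages per-fold accuracy. It assumes the BMCNN class from the sketch above is in scope; the real dataset, preprocessing, and training hyperparameters are stand-ins.

    # Generic ten-fold cross-validation skeleton; data and training loop are placeholders.
    import numpy as np
    import torch
    from sklearn.model_selection import KFold

    X = np.random.randn(500, 8, 100).astype(np.float32)   # fake sensing data: 500 samples
    y = np.random.randint(0, 20, size=500)                 # fake labels for 20 gesture classes

    accuracies = []
    for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        model = BMCNN()                                    # class from the sketch above (assumed)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = torch.nn.CrossEntropyLoss()

        xb = torch.from_numpy(X[train_idx])
        yb = torch.from_numpy(y[train_idx]).long()
        for _ in range(5):                                 # tiny illustrative training loop
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()

        with torch.no_grad():
            preds = model(torch.from_numpy(X[test_idx])).argmax(dim=1)
            truth = torch.from_numpy(y[test_idx]).long()
            accuracies.append((preds == truth).float().mean().item())

    print(f"mean ten-fold accuracy: {np.mean(accuracies):.4f}")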

Suggested Citation

Hao, Wei and Hou, Chen and Zhang, Zhihao and Zhai, Xueyu and Wang, Li and Lv, Guanghao, A Sensing Data and Deep Learning-Based Sign Language Recognition Approach. Available at SSRN: https://ssrn.com/abstract=4635330 or http://dx.doi.org/10.2139/ssrn.4635330

Wei Hao

China Agricultural University

Beijing
China

Chen Hou (Contact Author)

China Agricultural University

Beijing
China

Zhihao Zhang

affiliation not provided to SSRN

No Address Available

Xueyu Zhai

affiliation not provided to SSRN

No Address Available

Li Wang

affiliation not provided to SSRN

No Address Available

Guanghao Lv

affiliation not provided to SSRN

No Address Available

