SIGN LANGUAGE RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS

Authors

  • DR.N.SREEKANTH
  • PRACHI
  • K.SAI KIRTHANA

DOI:

https://doi.org/10.46243/jst.2023.v8.i07.pp47-57

Abstract

Sign Language Recognition (SLR) aims to interpret sign language as text or speech in order to facilitate communication between deaf-mute and hearing people. The task has broad social impact, but it remains very challenging due to the complexity and large variation of hand actions. Existing SLR methods use hand-crafted features to describe sign-language motion and build classification models on top of those features. However, it is difficult to design features reliable enough to adapt to the large variation of hand gestures. To address this problem, we propose a novel convolutional neural network (CNN) that automatically extracts discriminative spatio-temporal features from the raw video stream without any prior knowledge, avoiding manual feature design. To boost performance, multiple channels of video streams, including color information, depth cues, and body-joint positions, are fed into the CNN so that color, depth, and trajectory information are integrated. We validate the proposed model on a real dataset collected with Microsoft Kinect and demonstrate its effectiveness over traditional approaches based on hand-crafted features.
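The abstract describes stacking several Kinect-derived channels (color, depth, and body-joint positions) into a single input volume and letting a CNN learn spatio-temporal features from it. The sketch below illustrates one way such a multi-channel video clip could be modelled with 3D convolutions; the channel layout (RGB + depth + a rasterized joint map), layer sizes, and class count are illustrative assumptions, not architecture details taken from the paper.

```python
# Minimal sketch of a multi-channel spatio-temporal CNN for SLR.
# Assumptions (not from the paper): 5 input channels = RGB (3) + depth (1)
# + rasterized body-joint map (1); kernel sizes, pooling, and 25 classes
# are placeholders chosen only to make the example runnable.
import torch
import torch.nn as nn


class MultiChannelSignCNN(nn.Module):
    """3D CNN over a stacked clip of shape (batch, 5, frames, H, W)."""

    def __init__(self, num_classes: int = 25):
        super().__init__()
        self.features = nn.Sequential(
            # Joint spatial-temporal convolution over all input channels.
            nn.Conv3d(5, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # spatial downsampling only
            nn.Conv3d(16, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(2, 2, 2)),   # temporal + spatial downsampling
            nn.AdaptiveAvgPool3d((1, 4, 4)),       # collapse remaining time axis
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.features(clip)
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    # A batch of 2 clips, 16 frames each, 64x64 resolution, 5 channels.
    model = MultiChannelSignCNN(num_classes=25)
    clip = torch.randn(2, 5, 16, 64, 64)
    logits = model(clip)
    print(logits.shape)  # torch.Size([2, 25])
```

Fusing the channels at the input (early fusion) is only one option; per-channel streams merged later in the network would be an equally plausible reading of "multiple channels of video streams" in the abstract.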

Published

2023-07-24

How to Cite

DR.N.SREEKANTH, PRACHI, & K.SAI KIRTHANA. (2023). SIGN LANGUAGE RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS. Journal of Science & Technology (JST), 8(7), 47–57. https://doi.org/10.46243/jst.2023.v8.i07.pp47-57