A Survey of Sign Language Recognition

4 Citations2023
Vaishnavi Karanjkar, Rutuja Bagul, Raj Ranjan Singh
INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT

This paper discusses the use of deep learning for sign language recognition: a convolutional neural network is trained over epochs to detect hand gestures in images, using deep-learning-based computer vision.

Abstract

Sign language is used mainly by deaf (hard of hearing) and mute people to exchange information within their own community and with others. It is a language in which people communicate through hand gestures because they cannot speak or hear. The goal of sign language recognition (SLR) is to identify acquired hand motions and translate the corresponding hand gestures into text and speech. Sign language gestures can be divided into static and dynamic hand gestures. Both types of recognition are valuable to the community, although static hand gesture recognition is easier than dynamic hand gesture recognition. We use deep learning and computer vision to recognize hand gestures by building convolutional neural network (CNN) architectures, in which the model learns to detect hand gestures from images over training epochs. After the model successfully recognizes a gesture, an English text file is generated that can subsequently be converted to speech. The user can choose from a variety of translations for this text. The application runs entirely offline and requires no internet connection. With the model's improved efficiency, communication will become easier for deaf (hard of hearing) and mute people. In this paper, we discuss the use of deep learning for sign language recognition.

Key Words: sign language, convolutional neural network, computer vision.
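The recognition pipeline the abstract describes (image in, CNN-style feature extraction, predicted gesture label out as text) can be sketched in miniature. The following is an illustrative NumPy-only sketch, not the paper's actual architecture: the layer sizes, random weights, and the `GESTURE_CLASSES` label set are all hypothetical placeholders standing in for a trained model.

```python
import numpy as np

# Hypothetical label set; a real SLR model would cover a full alphabet/vocabulary.
GESTURE_CLASSES = ["A", "B", "C"]

def conv2d(image, kernel):
    """Valid 2D convolution (single channel, single filter)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trailing rows/cols that don't fit are dropped."""
    h = x.shape[0] // size * size
    w = x.shape[1] // size * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

def classify(image, kernel, weights):
    """Conv -> ReLU -> max-pool -> dense layer -> predicted class index."""
    features = np.maximum(conv2d(image, kernel), 0.0)  # ReLU activation
    pooled = max_pool(features).ravel()
    logits = weights @ pooled
    return int(np.argmax(logits))

# Stand-in data: an 8x8 "frame" and untrained random weights.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
kernel = rng.standard_normal((3, 3))
pooled_len = ((8 - 3 + 1) // 2) ** 2  # 6x6 conv output pooled to 3x3 = 9 values
weights = rng.standard_normal((len(GESTURE_CLASSES), pooled_len))

predicted = GESTURE_CLASSES[classify(image, kernel, weights)]
print(predicted)
```

In the full system described by the abstract, the predicted label would be appended to an English text file and then passed to an offline text-to-speech stage; here the prediction is simply printed.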