Sign language is a means of communication that uses manual gestures (movements of the hands and wrists) and non-manual gestures (facial expressions and body language). There are many different sign languages in the world, each with its own collection of words and signs. This study focuses on the implementation of an Indian Sign Language Recognition System (ISLRS), which helps deaf people communicate with others. In this paper, a model for Sign Language Recognition (SLR) of dynamic signs based on a Convolutional Neural Network (CNN) is proposed. The proposed model has been trained and tested on video clips of dynamic signs and achieved a training accuracy of 70%. This study should serve as a road map for deciding which model to implement and lay a foundation for future research on improving model accuracy, allowing the sign language community to communicate and share ideas more effectively. This work also aims to bridge the educational gap between hearing-impaired persons and the hearing population.
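The abstract does not describe the network architecture in detail; as an illustration only, the sketch below shows one common way to apply a CNN to short video clips of dynamic signs, using 3D convolutions over the frame sequence. All frame counts, resolutions, layer sizes, and class counts here are assumptions for the sketch, not values from the paper.

# Hypothetical sketch: a small 3D-CNN classifying short video clips of dynamic signs.
# Hyperparameters below are illustrative assumptions, not the paper's settings.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 16    # frames sampled per clip (assumed)
FRAME_SIZE = 64    # frames resized to 64x64 pixels (assumed)
NUM_CLASSES = 10   # number of dynamic sign classes (assumed)

def build_model():
    # 3D convolutions learn spatio-temporal features directly from the clip.
    model = models.Sequential([
        layers.Input(shape=(NUM_FRAMES, FRAME_SIZE, FRAME_SIZE, 3)),
        layers.Conv3D(16, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling3D(pool_size=2),
        layers.Conv3D(32, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling3D(pool_size=2),
        layers.GlobalAveragePooling3D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_model()
    # Random arrays stand in for preprocessed video clips and their sign labels.
    x = np.random.rand(8, NUM_FRAMES, FRAME_SIZE, FRAME_SIZE, 3).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=(8,))
    model.fit(x, y, epochs=1, batch_size=4)

In practice, such a model would be trained on preprocessed sign-language clips rather than random data, and alternatives such as per-frame 2D CNN features fed into a recurrent layer are also common for dynamic-sign recognition.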