This work uses deep learning models to enhance real-time sign detection and recognition, helping people with hearing and speech impairments connect with the world.
A real-time sign language recognition system is developed for recognizing gestures of Indian Sign Language (ISL). Sign languages consist largely of hand gestures. To recognize the signs, the regions of interest (ROIs) are identified and tracked using the skin segmentation feature of OpenCV. MediaPipe [1] then captures the landmarks of the hands, and the key points of those landmarks are stored in a NumPy array. A model is trained on these key points using TensorFlow, Keras, and an LSTM network, and is finally tested in real time on a live webcam feed.

Real-time sign detection and recognition is a valuable application for people with hearing and speech impairments, as it helps them communicate with the world. Previous approaches trained machine learning algorithms directly on images; here, deep learning models are used instead to improve real-time sign detection and recognition.
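The sketch below illustrates the kind of pipeline described above: MediaPipe extracts hand landmarks from each webcam frame, the key points are flattened into a NumPy array, and sequences of key points are fed to a small Keras LSTM classifier. It is only a minimal illustration, not the authors' implementation; the sequence length, feature size, number of gesture classes, and the use of MediaPipe Hands are assumptions made for the example.

```python
# Minimal sketch (not the authors' exact code) of landmark extraction and an
# LSTM classifier. SEQ_LEN, N_FEATURES, and N_CLASSES are illustrative values.
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

mp_hands = mp.solutions.hands

def extract_keypoints(frame, hands):
    """Return a flat NumPy array of (x, y, z) hand landmarks for one frame."""
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    keypoints = np.zeros(2 * 21 * 3)          # up to two hands, 21 landmarks each
    if results.multi_hand_landmarks:
        for i, hand in enumerate(results.multi_hand_landmarks[:2]):
            pts = np.array([[lm.x, lm.y, lm.z] for lm in hand.landmark]).flatten()
            keypoints[i * 63:(i + 1) * 63] = pts
    return keypoints

# Hypothetical setup: 30-frame sequences, 126 features per frame, 10 ISL gestures.
SEQ_LEN, N_FEATURES, N_CLASSES = 30, 126, 10

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(...) would be called here on recorded key-point sequences and labels.

# Real-time use: buffer SEQ_LEN key-point vectors from the webcam, then classify.
cap = cv2.VideoCapture(0)
sequence = []
with mp_hands.Hands(max_num_hands=2) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        sequence.append(extract_keypoints(frame, hands))
        if len(sequence) == SEQ_LEN:
            probs = model.predict(np.expand_dims(sequence, axis=0))[0]
            sequence = []                     # start a new window after each prediction
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
```

In a complete system the model would first be trained (model.fit) on recorded key-point sequences before the real-time loop is run; a sliding window over the frame buffer could also be used instead of resetting the sequence after each prediction.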