Human action recognition is still an open research field in the computer vision community. The current trend leans towards descriptions drawn from more than one modality, combined with large-scale learning strategies [1], and the introduction of Microsoft's Kinect sensor has accelerated this progress. In this thesis, we aim to recognize sign language, focusing specifically on gestures performed by deaf and mute persons in a multi-modal context. For those who cannot speak, sign language is their only means of communication. The utility of our work is justified by the size of the targeted population: according to the World Federation of the Deaf, there are over 70 million deaf people around the world, and this growing population uses 137 different sign languages. Hearing and speaking persons can rely on voice recognition services or text translation to interact with strangers or foreigners, but to date no service is capable of recognizing sign language gestures; such applications remain within the realm of science fiction.