
Hand Gesture Recognition for Sign Language Using 3DCNN

204 Citations · 2020
Muneer Al-Hammadi, Ghulam Muhammad, Wadood Abdul

This study proposed an efficient deep convolutional neural network approach for hand gesture recognition that employs transfer learning to overcome the scarcity of large labeled hand gesture datasets.

Abstract

Recently, automatic hand gesture recognition has gained increasing importance for two principal reasons: the growth of the deaf and hearing-impaired population, and the development of vision-based applications and touchless control on ubiquitous devices. Because hand gesture recognition is at the core of sign language analysis, a robust hand gesture recognition system should consider both spatial and temporal features. Unfortunately, finding discriminative spatiotemporal descriptors for a hand gesture sequence is not a trivial task. In this study, we propose an efficient deep convolutional neural network approach for hand gesture recognition. The proposed approach employs transfer learning to overcome the scarcity of large labeled hand gesture datasets. We evaluated it on three gesture datasets of color videos, using 40, 23, and 10 classes, respectively. In signer-dependent mode, the approach achieved recognition rates of 98.12%, 100%, and 76.67% on the three datasets, respectively. In signer-independent mode, it achieved recognition rates of 84.38%, 34.9%, and 70%, respectively.
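To illustrate the spatiotemporal filtering at the heart of a 3DCNN (this is a conceptual sketch, not the authors' implementation), the snippet below applies a single 3D convolution kernel across the time, height, and width axes of a video clip, producing a feature map that responds jointly to motion and appearance:

```python
import numpy as np

def conv3d(clip, kernel):
    """Valid (no-padding) 3D convolution of a (T, H, W) clip
    with a (kT, kH, kW) spatiotemporal kernel."""
    t, h, w = clip.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value pools a small space-time neighborhood,
                # capturing both spatial pattern and frame-to-frame motion.
                out[i, j, k] = np.sum(clip[i:i + kt, j:j + kh, k:k + kw] * kernel)
    return out

# Hypothetical input: a 16-frame grayscale clip at 32x32 resolution.
clip = np.random.rand(16, 32, 32)
kernel = np.random.rand(3, 3, 3)   # one 3x3x3 spatiotemporal filter
feat = conv3d(clip, kernel)
print(feat.shape)  # (14, 30, 30)
```

A 2D CNN would slide filters only over height and width; extending the kernel along the time axis is what lets the network learn the spatiotemporal descriptors the abstract refers to. In a transfer-learning setting, such filters would be pretrained on a large video corpus and only the classifier head fine-tuned on the smaller gesture datasets.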
