
Spatial-Temporal Multi-Cue Network for Sign Language Recognition and Translation

155 Citations · 2021
Hao Zhou, Wengang Zhou, Yun Zhou

A spatial-temporal multi-cue (STMC) network is proposed to solve the vision-based sequence learning problem in video-based sign language understanding and achieves new state-of-the-art performance on all three benchmarks.

Abstract

Despite the recent success of deep learning in video-related tasks, deep models typically focus on the most discriminative features, ignoring other potentially non-trivial and informative content. Such a characteristic heavily constrains their capability to learn the implicit visual grammars in sign videos that emerge from the collaboration of different visual cues (i.e., hand shape, facial expression and body posture). To this end, we approach video-based sign language understanding with multi-cue learning and propose a spatial-temporal multi-cue (STMC) network to solve the vision-based sequence learning problem. Our STMC network consists of a spatial multi-cue (SMC) module and a temporal multi-cue (TMC) module. The SMC module learns the spatial representation of different cues with a self-contained pose estimation branch. The TMC module models temporal correlations from intra-cue and inter-cue perspectives to explore the collaboration of multiple cues. A joint optimization strategy and a segmented attention mechanism are designed to make the best of multi-cue sources for SL recognition and translation. To validate the effectiveness of our method, we perform experiments on three large-scale sign language benchmarks: PHOENIX-2014, CSL and PHOENIX-2014-T. Experimental results demonstrate that the proposed method achieves new state-of-the-art performance on all three benchmarks.
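To make the intra-cue vs. inter-cue distinction concrete, here is a minimal toy sketch (not the paper's STMC implementation; the scalar features, moving-average "temporal convolution" and mean fusion are all simplifying assumptions). Each cue (hand, face, pose) is first modeled along time on its own (intra-cue), then the cues are combined per time step (inter-cue):

```python
# Hypothetical illustration of intra-cue / inter-cue temporal modeling.
# Each cue is a sequence of scalar features over T time steps.

def intra_cue(seq, k=3):
    """Temporal modeling within a single cue: centered moving average of width k,
    a stand-in for the per-cue temporal convolutions described in the abstract."""
    half = k // 2
    out = []
    for t in range(len(seq)):
        window = seq[max(0, t - half):t + half + 1]
        out.append(sum(window) / len(window))
    return out

def inter_cue(cues):
    """Fusion across cues at each time step: here, a simple per-step mean,
    standing in for learned inter-cue fusion."""
    T = len(next(iter(cues.values())))
    return [sum(seq[t] for seq in cues.values()) / len(cues) for t in range(T)]

cues = {
    "hand": [1.0, 2.0, 3.0, 4.0],
    "face": [0.0, 0.0, 1.0, 1.0],
    "pose": [2.0, 2.0, 2.0, 2.0],
}

# Intra-cue: capture temporal structure inside each cue independently.
smoothed = {name: intra_cue(seq) for name, seq in cues.items()}
# Inter-cue: combine the per-cue representations step by step.
fused = inter_cue(smoothed)
print(fused)
```

The point of the two-stage structure is that each cue keeps its own temporal dynamics before any mixing happens, so no single dominant cue can erase the others' temporal information, which is the failure mode the abstract attributes to standard single-stream models.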
