Context Matters: Self-Attention for Sign Language Recognition

Fares Ben Slimane, Mohamed Bouguessa

Abstract

This paper proposes an attentional network for the task of Continuous Sign Language Recognition. The proposed approach exploits co-independent streams of data to model the sign language modalities. Because these different channels of information can share a complex temporal structure with one another, we apply attention to synchronize them and capture their entangled dependencies. Even though sign language is multi-channel, handshapes represent its central entities, and seeing a handshape in its correct context defines the meaning of a sign. Taking that into account, we use the attention mechanism to aggregate the hand features with their appropriate spatio-temporal context for better sign recognition. We found that, by doing so, the model is able to identify the essential sign language components that revolve around the dominant hand and the face areas. We test our model on the benchmark dataset RWTH-PHOENIX-Weather 2014, yielding competitive results.
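
To make the aggregation step concrete, here is a minimal PyTorch sketch of the general idea, not the authors' exact architecture: cross-attention in which per-frame hand features act as queries over full-frame context features, so each handshape is weighted by the spatio-temporal context it appears in. The module name, tensor shapes, and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HandContextAttention(nn.Module):
    """Illustrative sketch: fuse hand features with frame-level context."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, hand_feats, context_feats):
        # hand_feats:    (batch, T, dim) features of the cropped dominant hand
        # context_feats: (batch, T, dim) features of the full frame (face, body, ...)
        # Queries come from the hand; keys/values come from the context, so the
        # attention weights select the context relevant to each handshape.
        attended, _ = self.attn(hand_feats, context_feats, context_feats)
        # Residual connection keeps the original hand features in the mix.
        return self.norm(hand_feats + attended)

# Toy usage: a 16-frame clip, batch of 2, 512-dim features (assumed values).
hand = torch.randn(2, 16, 512)
context = torch.randn(2, 16, 512)
fused = HandContextAttention()(hand, context)
print(fused.shape)  # torch.Size([2, 16, 512])
```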

Code Repositories

faresbs/slrt (official, PyTorch)
faresbs/san (PyTorch)

Benchmarks

Benchmark: sign-language-recognition-on-rwth-phoenix
Methodology: SAN
Metric: Word Error Rate (WER): 29.7
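
For reference, Word Error Rate is the minimum number of word substitutions, insertions, and deletions needed to turn the recognized gloss sequence into the reference, divided by the reference length. A minimal sketch of the computation follows; this is an illustrative helper, not code from the paper's repositories.

```python
def wer(reference, hypothesis):
    """Word Error Rate via dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i ref words and first j hyp words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Two deletions against a 5-word reference -> WER 0.4
print(wer("heavy rain in the north", "heavy rain north"))
```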
