Accelerating African Sign Language using OpenGesture

While deaf people from different language communities can communicate with each other without difficulty in South African Sign Language (SASL), they cannot understand "sign language" interpreters unless they have been schooled in the manually coded language used by the interpreter. With the growing adoption of assistive technologies, deaf and visually impaired people need to communicate naturally with their network, regardless of whether the other person has expertise in sign language, especially during video consultations with a health practitioner, an educator, or friends and family. We propose a deep neural network for South African Sign Language (SASL) prediction and gesture recognition that drives standard English text translation in natural video sequences, running on a CPU. To effectively handle the complex evolution of pixels in videos, we propose to decompose them into motion and content, the two key components generating the dynamics in videos.
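One simple way to picture the motion/content decomposition is to treat a reference frame as the static "content" and the frame-to-frame differences as the "motion"; the original clip is then recoverable from the two parts. The sketch below is only an illustration of this idea on NumPy arrays, not the network architecture itself; the function names are our own.

```python
import numpy as np

def decompose_motion_content(frames):
    """Split a clip into a static content frame and per-step motion.

    Minimal illustration of the motion/content idea: the first frame is
    "content", successive frame differences are "motion".
    frames: float array of shape (T, H, W), T >= 1.
    """
    content = frames[0]               # static appearance
    motion = np.diff(frames, axis=0)  # (T-1, H, W) temporal changes
    return content, motion

def reconstruct(content, motion):
    """Recover the original clip: frame k = content + sum of motions up to k."""
    later = content[None] + np.cumsum(motion, axis=0)
    return np.concatenate([content[None], later], axis=0)

# Tiny synthetic clip: 4 frames of 2x3 pixels.
frames = np.arange(24, dtype=float).reshape(4, 2, 3)
content, motion = decompose_motion_content(frames)
recovered = reconstruct(content, motion)
```

In a learned model the two streams are produced by separate encoders rather than by plain differencing, but the division of labor is the same: one pathway carries appearance, the other carries temporal change.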

Methodology / Approach

Data Collection: Model accuracy and performance improve with as much high-quality, accurate data as possible, so the training data consists of two types of hand images: color and depth. The OpenGesture Sign Language Digits RGB + RGB-D Dataset was collected using an Intel RealSense D435. The dataset covers ten sign language digit classes (0-9), and each gesture is repeated 30 times by each of two independent signers.
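With two modalities per sample, it helps to index the color and depth images together before training. The snippet below sketches one plausible pairing scheme; the directory layout (`rgb/<digit>/...` mirrored by `depth/<digit>/...`) is an assumption for illustration and may not match the actual archive, so it builds a tiny fake dataset to demonstrate the indexing.

```python
import tempfile
from pathlib import Path

# Assumed on-disk layout (the real OpenGesture archive may differ):
#   <root>/rgb/<digit>/<sample>.png
#   <root>/depth/<digit>/<sample>.png

def index_dataset(root):
    """Return {digit: [(rgb_path, depth_path), ...]} pairing both modalities."""
    root = Path(root)
    index = {}
    for digit_dir in sorted((root / "rgb").iterdir()):
        pairs = []
        for rgb in sorted(digit_dir.glob("*.png")):
            depth = root / "depth" / digit_dir.name / rgb.name
            if depth.exists():  # keep only samples present in both modalities
                pairs.append((rgb, depth))
        index[digit_dir.name] = pairs
    return index

# Build a tiny fake dataset (one sample per digit) to demonstrate the indexing.
root = Path(tempfile.mkdtemp())
for modality in ("rgb", "depth"):
    for digit in range(10):
        d = root / modality / str(digit)
        d.mkdir(parents=True)
        (d / "sample_000.png").touch()

index = index_dataset(root)
```

Pairing by filename keeps the two streams aligned so a sample's color and depth views always carry the same label.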

Download OpenGesture Depth and Color Dataset:

Download OpenGesture Color(RGB) Dataset:

Deep Learning Model Training: We apply transfer learning, taking a model that has already been trained on a related task and reusing it to train a custom deep learning model that recognizes Sign Language Digits. The OpenGesture image recognition model is built with TensorFlow and Keras to classify the ten gesture signs, using the computational power of AWS EC2 DL1 instances powered by Habana Gaudi AI processors/accelerators.
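A typical Keras transfer-learning setup freezes a pretrained backbone and trains a small classification head for the ten digit classes. This is only a sketch of that pattern, not the OpenGesture4Habana notebook's actual code: the MobileNetV2 backbone and hyperparameters are assumptions, and `weights=None` is used here so the example runs offline (in practice you would load `weights="imagenet"`).

```python
import tensorflow as tf

# Hypothetical backbone choice; the actual notebook may use a different one.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),  # ten digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Only the pooling, dropout, and dense layers are trained at first; optionally the top of the backbone can be unfrozen later for fine-tuning at a lower learning rate.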

To retrain OpenGesture, clone the OpenGesture for Habana repository on GitHub:

Use Jupyter Notebook to open the Image Classification Custom Model notebook, OpenGesture4Habana.

Built With

Share this project: