Inspiration

Although computer vision has come a long way, we have yet to build a model that predicts sign language with high accuracy. Sign language relies heavily on expression and visual feedback to convey a message, and slight variations in movement make training models difficult. Here, I present a model that takes in images of ASL alphabet signs and predicts them in real time using OpenCV.

What it does

Trained on the ResNet50 architecture, the model successfully classifies the letters of the ASL alphabet (excluding signs that require motion, such as J and Z). I am providing video footage of real-time classification on a YouTube video that demonstrates the alphabet.
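A minimal sketch of that real-time classification loop, assuming the trained fastai model was exported as `asl_resnet50.pkl` and the demo footage is saved as `asl_demo.mp4` (both file names are placeholders, not from the project):

```python
import cv2
from fastai.vision.all import load_learner

learn = load_learner("asl_resnet50.pkl")  # exported fastai model (placeholder name)
cap = cv2.VideoCapture("asl_demo.mp4")    # demo video file (placeholder name)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # fastai expects RGB images; OpenCV reads frames as BGR
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    letter, _, probs = learn.predict(rgb)
    # overlay the predicted letter and its confidence on the frame
    cv2.putText(frame, f"{letter} ({probs.max():.2f})", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("ASL classifier", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Passing `0` to `cv2.VideoCapture` instead of a file name would switch the same loop to a webcam feed.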

How I built it

I built it by leveraging fast.ai's training libraries, OpenCV's video capture and image-processing tools, Kaggle's free and open datasets, and Kaggle's Notebook environment.
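The training pipeline can be sketched with fastai's high-level API; this assumes the Kaggle ASL dataset is unpacked into a folder with one subfolder per letter (the path and epoch count are assumptions, not the project's exact settings):

```python
from fastai.vision.all import *

path = Path("asl_alphabet_train")  # one subfolder per class (assumed layout)
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,
    item_tfms=Resize(224),          # ResNet50's usual input size
    batch_tfms=aug_transforms(),    # light augmentation for robustness
)
learn = vision_learner(dls, resnet50, metrics=accuracy)
learn.fine_tune(4)                  # transfer-learn from ImageNet weights
learn.export("asl_resnet50.pkl")    # save for real-time inference
```

`fine_tune` first trains only the new classification head, then unfreezes and trains the whole network, which is the standard fastai transfer-learning recipe.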

Accomplishments that I'm proud of

A validation accuracy of 99.98%, and successfully detecting and predicting the letters in real time.

What's next for ASL alphabet classifier using fastai and OpenCV

Integrate the same model into real-time alphabet classification using a live webcam feed, and build a model that can convert a sequence of images into words.
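As a first step toward turning image sequences into words, noisy per-frame letter predictions can be collapsed into text by only committing a letter once it persists for several consecutive frames. A hypothetical sketch (the `min_frames` threshold and the `"space"`/`"nothing"` gap labels are assumptions mirroring the extra classes in ASL datasets, not part of the current model):

```python
def letters_to_text(frame_predictions, min_frames=3):
    """Collapse noisy per-frame letter predictions into a string.

    A letter is committed once it has been predicted for `min_frames`
    consecutive frames, and is not committed again until a different
    prediction (or a "space"/"nothing" gap) intervenes.
    """
    text = []
    current, run = None, 0
    last_committed = None
    for letter in frame_predictions:
        if letter == current:
            run += 1
        else:
            current, run = letter, 1
        if run == min_frames and current != last_committed:
            if current == "space":
                text.append(" ")
            elif current != "nothing":
                text.append(current)
            last_committed = current
    return "".join(text)
```

One known limitation of this simple debouncer: doubled letters (as in "HELLO") need an intervening gap frame before the same letter can be committed again.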

Built With

fast.ai, OpenCV, ResNet50, Kaggle Notebooks
