In the UK, 11 million people are hard of hearing, but only an estimated 150,000 people use sign language. We aim to improve people's knowledge of sign language by teaching it to them in an interactive way using machine learning.

What it does

The system shows you a letter and waits for you to make the corresponding sign (in ASL) on the webcam; when you sign it correctly, it increases your score and moves on to the next letter.
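The core quiz logic can be sketched as a small pure function (a hypothetical sketch: names such as `check_sign` and `LETTERS` are ours, not from the project code; the letter set assumes only the static ASL letters, since J and Z involve motion and cannot be captured in a single frame):

```python
import random

# Static ASL fingerspelling letters (J and Z are excluded here as an
# assumption, since they require hand movement).
LETTERS = "ABCDEFGHIKLMNOPQRSTUVWXY"

def next_letter(rng=random):
    """Pick the letter the user is asked to sign next."""
    return rng.choice(LETTERS)

def check_sign(target, predicted, score):
    """Compare the classifier's prediction for the current webcam frame
    against the target letter. On a match, increase the score and move
    on to a new letter; otherwise keep waiting on the same letter."""
    if predicted == target:
        return score + 1, next_letter()
    return score, target
```

In the real application this function would be called once per classified webcam frame, with `predicted` coming from the CNN.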

How I built it

It uses Python 3.5 with tensorflow 1.2.1, matplotlib 2.0.2, numpy 1.13.0, and opencv-python. We could find no pre-trained model, so it uses the dataset and some of the code from an existing GitHub repository, with some refinements added and the test functionality built on top.
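A typical step in a pipeline like this is preprocessing each webcam frame before it reaches the network. The sketch below uses numpy only so it is self-contained; in the actual project `cv2.cvtColor` and `cv2.resize` would do the equivalent work, and the 64x64 target size is our assumption, not the project's real input shape:

```python
import numpy as np

def preprocess(frame, size=64):
    """Turn a webcam frame (H x W x 3 uint8 array) into a small,
    normalised grayscale image suitable for feeding to a CNN.
    The 64x64 size is illustrative only."""
    gray = frame.mean(axis=2)                 # crude grayscale conversion
    h, w = gray.shape
    # Nearest-neighbour resize by sampling evenly spaced rows/columns
    # (cv2.resize would be used in practice).
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = gray[rows][:, cols]
    return (small / 255.0).astype(np.float32)  # scale pixels to [0, 1]
```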

Challenges I ran into

Because no pre-trained models were available, the training script took over 8 hours to run and didn't finish until 2 am. The code from the GitHub repository we used was very buggy (it wouldn't start without many changes being made) and had accuracy issues, so we had to spend a lot of time debugging it and then trying to refine its accuracy.

Accomplishments that I'm proud of

We took complicated code in an area where we had little background knowledge and understood it well enough to debug and refine it.

What I learned

We gained knowledge of sign language and the different sign languages used around the world, learnt a lot about machine learning (in particular convolutional neural networks), and got plenty of hands-on experience with OpenCV.
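To illustrate the convolution operation at the heart of a convolutional neural network, here is a hand-rolled sketch (not the project's TensorFlow code) of a single valid convolution, applied with a vertical-edge filter of the kind a CNN's first layer typically learns:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly cross-correlation, as in most
    deep-learning frameworks) of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel is the sum of an image patch weighted
            # by the kernel.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Responds strongly where intensity changes from left to right,
# i.e. at vertical edges in a hand silhouette.
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])
```

A trained CNN stacks many such filters, learning the kernel weights from the dataset instead of fixing them by hand.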

What's next for Sign Language Test Using Machine Learning

The test could be implemented as a web or mobile application to improve accessibility and usability. Users could create accounts and have their results saved so they can monitor their learning progress. We could also add an interactive way to practise separate from the test. The training set could be improved to increase accuracy, and potentially extended to cover common phrases rather than just letters.

Built With

python, tensorflow, opencv, numpy, matplotlib