My group has been learning machine learning through various online courses. For a hackathon, we decided the best challenge would be to build a neural network that translates sign language into English text.

What it does

This translator converts sign language into English text. The website takes a photo at a regular interval, which is then passed through a neural network that determines the letter being signed. The only two letters that cannot be recognized are J and Z, which require motion to sign. If the website takes a photo of an empty wall, it interprets it as a space.
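The class-to-text mapping described above can be sketched as follows; the index order, the `BLANK` label, and the helper name are illustrative assumptions, not the project's exact scheme.

```python
# Hypothetical label mapping: 24 static letters (J and Z are excluded
# because they require motion) plus a "blank" class for an empty frame.
LETTERS = [c for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ" if c not in ("J", "Z")]
CLASSES = LETTERS + ["BLANK"]  # 25 classes in total

def class_to_text(index: int) -> str:
    """Map a predicted class index to the character shown on the page."""
    label = CLASSES[index]
    return " " if label == "BLANK" else label
```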

How I built it

The machine learning aspect of this project was built with TensorFlow and Keras. For the dataset, we combined datasets we found online with images we captured ourselves to improve accuracy. The neural network was trained and validated on thousands of images.
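A minimal sketch of the kind of Keras classifier this describes; the layer sizes, the 28×28 grayscale input, and the 25-class output (24 static letters plus blank) are assumptions for illustration, not the project's actual architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_classes: int = 25) -> keras.Model:
    """Small CNN over grayscale hand images (hypothetical architecture)."""
    model = keras.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
# Training on the combined dataset would then be a call like:
# model.fit(train_images, train_labels, validation_split=0.1, epochs=10)
```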

Django serves as the backend for this project. The front-end application sends AJAX requests containing the image data to the Django server, which calls the script that runs the prediction. The result is then returned to the web page to be displayed.
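The request/response cycle above can be sketched framework-agnostically with only the standard library; the field names (`image`, `letter`), the base64 data-URL format, and the stubbed predictor are assumptions standing in for the real Django view and model call.

```python
import base64
import json

def predict_letter(image_bytes: bytes) -> str:
    """Stand-in for the real neural-network prediction script."""
    return "A"  # the actual project runs the trained Keras model here

def handle_prediction_request(body: str) -> str:
    """Handle one AJAX request: JSON in, JSON out.

    The front end is assumed to send the canvas contents as a base64
    data URL, e.g. {"image": "data:image/png;base64,...."}.
    """
    payload = json.loads(body)
    # Strip the "data:image/png;base64," prefix if present.
    _, _, encoded = payload["image"].rpartition(",")
    image_bytes = base64.b64decode(encoded)
    letter = predict_letter(image_bytes)
    return json.dumps({"letter": letter})
```

In the real project, this logic would live inside a Django view, with the JSON response handed back to the AJAX caller for display.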

The camera captures images at a set interval. Each frame is passed to a JavaScript function that draws the result onto a canvas on the web application. The image is then interpreted by the neural network, which determines which English letter is being conveyed, and the letter is displayed on the web application.
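Before the network sees a frame, it has to be reduced to the input the model expects. A rough NumPy sketch of that preprocessing, assuming a 28×28 grayscale input and simple nearest-neighbour resizing (the actual resolution and normalization are assumptions):

```python
import numpy as np

def preprocess_frame(frame: np.ndarray, size: int = 28) -> np.ndarray:
    """Convert an RGB frame of shape (H, W, 3) into a model input tensor.

    Assumed pipeline: grayscale, nearest-neighbour resize, scale to [0, 1],
    then add batch and channel axes to get shape (1, size, size, 1).
    """
    gray = frame.mean(axis=2)              # naive grayscale: average channels
    h, w = gray.shape
    rows = np.arange(size) * h // size     # nearest-neighbour row indices
    cols = np.arange(size) * w // size     # nearest-neighbour column indices
    small = gray[rows][:, cols]
    return (small / 255.0).astype("float32")[None, :, :, None]
```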

Challenges I ran into

The initial dataset, obtained from a public online library, contained low-resolution images, so our neural network had trouble differentiating between sign gestures. The solution was to procure a new dataset from a superior collection of images, drawn from another source and later supplemented with our own photos, giving the network both higher-resolution images and clearer gestures.

Accomplishments that I'm proud of

We developed a working neural network and applied our knowledge to a problem that sign language users face when communicating with others.

What I learned

We experienced the difficulties of getting a neural network to perform consistently when applied to a new situation.

What's next for Sign Slate

We want to further improve the neural network with better algorithms, layers, and data, which would improve both the speed and accuracy of the results. Another goal is a mobile application for use while traveling, since an internet connection may not always be available.

Built With

django · javascript · keras · tensorflow
