Inspiration

We wanted to build software that improves communication and accessibility by translating American Sign Language for people who don't sign.

What it does

pySign uses a convolutional neural network (CNN) to classify the letters of the American Sign Language (ASL) alphabet, and a web interface to provide live translation from a webcam feed.
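For illustration, here is a minimal sketch of this kind of classifier; the 28x28 grayscale input, layer sizes, and 24 output classes (the static letters, since signing J and Z involves motion) are assumptions rather than our exact architecture:

```python
import torch
import torch.nn as nn

class ASLNet(nn.Module):
    """Small CNN for static ASL letter classification (illustrative sketch)."""

    def __init__(self, num_classes: int = 24):  # 24 assumed: J and Z require motion
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # grayscale input assumed
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))  # flatten all but the batch dim
```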

How we built it

We used PyTorch to train the neural network, Flask to build a small Python server that handles requests from the web frontend, and HTML/CSS/JavaScript for a small web app that communicates with that server.
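A hedged sketch of how such a prediction endpoint could look; the /predict route, the base64 JSON payload, the checkpoint path, and the `model` module import are assumptions about the design, not our exact code:

```python
import base64
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

from model import ASLNet  # the CNN sketched above; "model" module is hypothetical

app = Flask(__name__)
net = ASLNet()
net.load_state_dict(torch.load("asl_cnn.pt"))  # hypothetical checkpoint path
net.eval()

LETTERS = "ABCDEFGHIKLMNOPQRSTUVWXY"  # 24 static ASL letters (no J or Z)
preprocess = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((28, 28)),
    transforms.ToTensor(),
])

@app.route("/predict", methods=["POST"])
def predict():
    # The frontend captures a webcam frame and sends it as base64-encoded JSON.
    frame = base64.b64decode(request.json["image"])
    image = Image.open(io.BytesIO(frame))
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 1, 28, 28)
    with torch.no_grad():
        letter = LETTERS[net(batch).argmax(dim=1).item()]
    return jsonify({"letter": letter})
```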

Challenges we ran into

Some of the pretrained models we tried in order to simplify training our CNN turned out to be difficult to integrate; others either trained far too slowly to be useful or plateaued and could not be trained to higher accuracy.
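For context, the standard way to adapt a pretrained model is to freeze its backbone and retrain only a new final layer. Here is a sketch of that pattern using torchvision's ResNet-18 purely as an example, not necessarily one of the models we tried:

```python
import torch.nn as nn
from torchvision import models

# Illustrative transfer-learning setup: freeze the pretrained backbone and
# train only a new classification head for the 24 static ASL letters.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False  # freeze all pretrained weights

# Replace the final layer; the new nn.Linear is trainable by default.
backbone.fc = nn.Linear(backbone.fc.in_features, 24)
```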

Accomplishments that we're proud of

We're proud of training a neural network that identifies inputs with reasonable accuracy, and of shipping a finished product in so little time.

What we learned

We learned a great deal about the computational and theoretical complexity of machine learning, and about wiring a machine-learning model into a web application.

What's next for pySign

pySign's future involves finding a more effective model for identifying ASL and a permanent site to serve the app from. Further out, we'd like to teach pySign to recognize signs in live video, and to recognize more signs than just the ASL alphabet.
