Inspiration

The members of our team have recently taken a sign language course and want to practice what they learned.

What it does and how we built it

We used Google's TensorFlow library to retrain an image-classification model to detect ASL letters from pictures of hands. We then built a graphical user interface with PyGame that asks users to spell out a word in ASL and checks whether each sign is correct.
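The spell-checking step can be sketched in plain Python. This is a minimal illustration, not the project's actual code: it assumes a hypothetical list of letters already predicted by the retrained classifier and simply compares them against the target word, the way the PyGame interface would mark each sign right or wrong.

```python
def check_spelling(target_word, predicted_letters):
    """Compare classifier-predicted letters against the target word.

    target_word: the word the user is asked to fingerspell.
    predicted_letters: one predicted letter per sign, as returned by
    the (hypothetical) classifier wrapper around the retrained model.
    Returns a list of booleans, one per letter position.
    """
    results = []
    for expected, predicted in zip(target_word.upper(), predicted_letters):
        results.append(expected == predicted.upper())
    return results

# Example: the user signs "C", "A", then an incorrect letter for "T".
print(check_spelling("cat", ["C", "A", "B"]))  # [True, True, False]
```

In the real interface, each entry of `predicted_letters` would come from running the TensorFlow model on a captured image of the user's hand.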

Challenges we ran into

We used Google Images results to train our model, but many of the results were unrelated to the letter we searched for. Filtering them out shrank our dataset well below our target of 100 images. We also spent a fair amount of time tracking down Python syntax errors.

Accomplishments that we're proud of

We learned more about machine learning and got further practice programming in Python.

What we learned

TensorFlow, PyGame

What's next for Time to Sign

We hope to retrain the model to recognize basic ASL words so users can practice signing with a broader vocabulary. In the future, we also hope Time to Sign can help interpret ASL for people who are unfamiliar with the language.
