Inspiration

We wanted to create an interactive platform for people to learn and practice sign language.

What it does

It allows users to type by holding sign language letters up in front of a camera. It also lets users control the mouse and draw on screen by holding up objects in front of the camera.

How we built it

We used Tkinter for the GUI, OpenCV for the image processing, and TensorFlow for the convolutional neural network that recognizes the sign language characters.
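
The recognition model could look something like the following minimal tf.keras sketch: a small convolutional network that classifies 26 ASL letter images. The layer sizes, input resolution, and function name here are our illustrative assumptions, not the project's actual architecture.

```python
# Sketch of a small CNN for classifying 26 sign language letters.
# All hyperparameters (input size, filter counts, dense width) are assumptions.
import tensorflow as tf

def build_sign_cnn(num_classes=26, input_shape=(64, 64, 1)):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),   # low-level edge features
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),   # hand-shape features
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),  # one score per letter
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A model like this would be trained on cropped, grayscale hand images and queried once per camera frame to map the detected hand to a letter.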

Challenges we ran into

Separating the hand from the background was difficult, and inconsistent lighting made it even harder to do reliably.

Accomplishments that we're proud of

We're proud that we finished the project: a working pipeline from live camera input through sign recognition to on-screen typing and drawing.

What we learned

We learned the value of planning ahead of time.

What's next for VizSign

We hope to make the recognition accurate and robust enough for VizSign to fit comfortably into everyday use.

Built With

Tkinter, OpenCV, TensorFlow
