One of our team members has a friend who uses sign language. She would like to learn how to communicate in sign language in order to understand her friend.

What it does

SignPi is a project that uses a Raspberry Pi 3 to enable communication between a user who does not understand sign language and one who does. The primary user wears the device with its camera pointed at the person signing. The device captures the gestures and reads them out loud for the primary user.
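On the device side, the flow above amounts to capturing a camera frame and posting it to the translation server. A minimal sketch of the upload step is below; the endpoint URL and payload shape are illustrative assumptions, not the actual SignPi API.

```python
import urllib.request

SERVER_URL = "http://example.invalid/predict"  # placeholder, not the real server

def build_upload_request(jpeg_bytes, url=SERVER_URL):
    """Wrap one captured camera frame as a POST request for the server.

    The endpoint path and JPEG payload are assumptions for illustration.
    """
    return urllib.request.Request(
        url,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )

# On the Pi, a capture loop would grab a frame from the camera module,
# send it with urllib.request.urlopen(build_upload_request(frame)), and
# hand the returned sentence to a text-to-speech tool to read it aloud.
```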

How we built it

This project is built on a Raspberry Pi 3 with a camera sensor that captures sign language gestures. The captured gestures are then sent to a Flask server hosted on Google Cloud. Using a Keras model with a TensorFlow back-end, the server translates the gestures into sentences, which the Raspberry Pi reads out loud.
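Once the model has classified each gesture, the server still has to turn the predictions into readable text. The sketch below shows that assembly step under an assumed label set (one class per letter plus a "space" class); the mapping is illustrative and the model call itself is stubbed out.

```python
import string

# Hypothetical label set: the model is assumed to emit one class index
# per gesture, one class per letter plus a trailing "space" class.
LABELS = list(string.ascii_lowercase) + [" "]

def predict_class(frame):
    # Stand-in for the Keras model; a real deployment would run
    # model.predict(frame) and take the argmax of the class scores.
    raise NotImplementedError

def classes_to_sentence(class_indices):
    """Map a sequence of predicted class indices to readable text."""
    letters = [LABELS[i] for i in class_indices]
    return "".join(letters).strip()

# Example: indices spelling "hi there" (a=0, so h=7, i=8, space=26, ...)
print(classes_to_sentence([7, 8, 26, 19, 7, 4, 17, 4]))  # hi there
```

The resulting sentence is what the Flask endpoint would return to the Raspberry Pi for text-to-speech playback.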

Challenges we ran into

For some of us, this was our first machine learning project. We also had trouble deploying to our Google Cloud server at first because we were using the wrong settings.

Accomplishments that we're proud of

We were able to record gestures with the Raspberry Pi and send them to the Google Cloud server. We also trained the machine learning model to predict the gestures correctly.

What we learned

We learned a lot about machine learning concepts and how to implement them.

What's next for SignPi

We would like to improve our model's accuracy and have it translate not only sign language letters, but also whole words.

Built With

Raspberry Pi 3, Flask, Google Cloud, TensorFlow, Keras
