Inspiration

My grandmother lost her hearing eight months ago and can no longer communicate without sign language. She recently came to the US from India to visit us, but since none of us know sign language, we cannot communicate with her. After seeing the frustration on my father's face as he struggled to sign, I decided to help him, and others in similar situations, by building a real-time translator.

What it does

This app has two main functions. First, it takes live video of sign language and converts it to English characters that the user can read and understand. Second, it takes speech and converts it to text so that the hearing impaired can read it. Together, these allow two-way communication between the deaf and people who do not know sign language.
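
To give a concrete picture of the second function, here is a minimal sketch of streaming speech recognition with Apple's Speech framework. It is an illustration rather than our exact code, and it assumes microphone and speech-recognition permissions have already been granted:

```swift
import AVFoundation
import Speech

// Minimal speech-to-text sketch: stream microphone audio into Apple's
// speech recognizer and surface partial transcripts for the deaf user to read.
final class SpeechTranscriber {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    private let audioEngine = AVAudioEngine()
    private let request = SFSpeechAudioBufferRecognitionRequest()

    func start(onText: @escaping (String) -> Void) throws {
        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        // Feed each microphone buffer into the recognition request.
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            self.request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        // Report partial transcripts as they arrive so the on-screen text updates live.
        _ = recognizer.recognitionTask(with: request) { result, _ in
            if let result = result {
                onText(result.bestTranscription.formattedString)
            }
        }
    }
}
```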

How we built it

We built a Swift iOS app around a model trained in TensorFlow and converted to Core ML. We trained the model on thousands of hand-sign images so that, given a hand sign, it returns the corresponding character. We integrated this model into the iOS app and displayed the predicted characters on the screen. We also used Apple's speech recognition API to transcribe speech so that the deaf can read it on the screen.
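
A rough sketch of how a single camera frame gets classified, using Vision to drive the Core ML model. `HandSignClassifier` is a placeholder for the Xcode-generated class of the converted model, not its real name:

```swift
import CoreML
import Vision

// Classify one camera frame and return the predicted character.
// "HandSignClassifier" stands in for the Xcode-generated model class.
func classifySign(in pixelBuffer: CVPixelBuffer,
                  completion: @escaping (String) -> Void) throws {
    let coreMLModel = try HandSignClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Take the top-scoring label as the predicted character, e.g. "A".
        if let top = (request.results as? [VNClassificationObservation])?.first {
            completion(top.identifier)
        }
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}
```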

Challenges we ran into

The main challenge we ran into was that our training data was too specific to the trainer's hand, so the model often failed to recognize the correct sign on other people's hands. This reduced the app's accuracy, which means our translator is not as accurate as it ideally would be. To mitigate this, we retrained the model with fewer epochs to avoid overfitting, which did improve accuracy, albeit by a small margin.
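
We trained in TensorFlow, but the same "stop before the model memorizes the trainer's hand" idea is easy to show with Create ML, which trains an image classifier directly in Swift. The paths and iteration cap below are placeholders, and parameter names vary slightly across Create ML versions:

```swift
import CreateML
import Foundation

// Illustration only: cap training iterations so the classifier does not
// overfit a small, single-person hand-sign dataset.
let trainingData = MLImageClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "HandSigns/train"))

// Fewer iterations is the Create ML analogue of running fewer epochs.
let parameters = MLImageClassifier.ModelParameters(maxIterations: 10)

let classifier = try MLImageClassifier(trainingData: trainingData,
                                       parameters: parameters)
try classifier.write(to: URL(fileURLWithPath: "HandSignClassifier.mlmodel"))
```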

Accomplishments that we're proud of

We are proud of integrating two different machine learning models into a single iOS app: one that detects a hand and another that converts the hand sign into an English character. This was very daunting for our team, since it was the first time we had embedded ML models in an iOS app, but we persevered and were able to teach ourselves using the TensorFlow documentation and the help of our mentors.
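
To show how the two models fit together, here is a sketch of the detect-then-classify flow. Vision's built-in hand-pose request stands in here for the hand detector; the exact detector and thresholds in our app differ:

```swift
import Vision

// Two-stage pipeline sketch: find a hand in the frame, then pass the
// cropped region to the sign classifier from the earlier sketch.
func detectThenClassify(pixelBuffer: CVPixelBuffer) throws {
    let handRequest = VNDetectHumanHandPoseRequest()
    handRequest.maximumHandCount = 1

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([handRequest])
    guard let hand = handRequest.results?.first else { return } // no hand in frame

    // Build a bounding box from the detected joints (normalized coordinates).
    let joints = try hand.recognizedPoints(.all).values.filter { $0.confidence > 0.3 }
    guard !joints.isEmpty else { return }
    let xs = joints.map { $0.location.x }
    let ys = joints.map { $0.location.y }
    let box = CGRect(x: xs.min()!, y: ys.min()!,
                     width: xs.max()! - xs.min()!, height: ys.max()! - ys.min()!)

    // Crop the frame to `box`, scale it to the model's input size, and feed it
    // to classifySign(in:completion:). (Cropping omitted for brevity.)
    _ = box
}
```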

What we learned

What's next for Sign Language Translator

Next, we want to build a more robust dataset that lets our translator handle a greater variety of hand signs and improves accuracy on the ones it already knows. We would also love to go beyond speech-to-text and render spoken words as hand signs shown to the hearing impaired, instead of having them read text off the screen. Finally, we would like to improve the UI.

Built With

Swift, Core ML, TensorFlow, Apple's Speech framework
