There are no built-in Sign Language translation features in Google Translate or in social media platforms such as Snap or Facebook.
What it does
SignTranslate is an Android camera app that enables real-time communication between English speakers and Sign Language users. It translates one Sign Language character at a time and composes the characters into a word or phrase. We plan to integrate the tool into users' daily routines, as well as with social media platforms such as Snapchat.
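The character-at-a-time composition step could look like the following minimal sketch. This is a hypothetical illustration, not the app's actual code: we assume a classifier that emits one letter (or None) per camera frame, and accept a letter only after it stays stable for a few consecutive frames so that transient misclassifications don't get appended.

```python
def compose_word(frame_predictions, hold_frames=3):
    """Turn a stream of per-frame letter predictions into a word.

    A letter is appended once it has been predicted for `hold_frames`
    consecutive frames; None (no hand detected) resets the counter.
    """
    word = []
    last = None   # letter seen on the previous frame
    run = 0       # how many consecutive frames it has been seen
    for letter in frame_predictions:
        if letter is None:
            last, run = None, 0
            continue
        if letter == last:
            run += 1
        else:
            last, run = letter, 1
        if run == hold_frames:   # accept exactly once per stable run
            word.append(letter)
    return "".join(word)
```

For example, a stream of three stable "H" frames followed by three stable "I" frames would compose "HI", while a letter glimpsed for only one or two frames would be dropped.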
How we built it
We used Android Studio for picture/video recording and aim to use Google Cloud TPU for the machine learning model. We used a Google Cloud Storage bucket to store training data and models.
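Assuming the `gsutil` CLI and a hypothetical bucket name (our real bucket and file names differ), uploading training data and trained models to Cloud Storage looks roughly like this:

```shell
# Create a bucket (name and region are placeholders) and upload
# training data and a trained model to it.
gsutil mb -l us-central1 gs://signtranslate-data
gsutil -m cp -r ./training_data gs://signtranslate-data/training_data
gsutil -m cp ./models/sign_model.h5 gs://signtranslate-data/models/
```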
Challenges we ran into
It's really hard to properly configure the dependencies on a GCP TPU, as there are few existing/preconfigured images we could use. We also had issues with breaking changes in TensorFlow's structure that affected our tf and keras dependencies.
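One way to reduce the tf/keras version clashes is to pin the TensorFlow release in a requirements file so the Docker image always builds against the same API surface. The version number below is only an example, not necessarily the one we used:

```
# requirements.txt (example pin; choose the release your TPU image supports)
tensorflow==2.11.0
# Avoid also installing the standalone `keras` package alongside tf.keras;
# mismatched versions of the two are a common source of import errors.
```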
Accomplishments that we're proud of
We got the camera and cloud storage working, and we are now more familiar with Docker and with setting up a Cloud TPU.
What's next for SignTranslate
We want continuous classification and translation of gestures, but of course we first need the character-by-character version working. We'll experiment with different machine learning models once the TPU is set up (and we'll need to factor in how much budget it takes to reach an error rate acceptable for daily communication).
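As a rough idea of how continuous classification could build on the per-frame classifier, here is a hedged sketch (the function, window size, and voting rule are our own assumptions, nothing here is implemented yet): a sliding majority vote over recent frame labels that emits a gesture label whenever the dominant label changes.

```python
from collections import Counter, deque

def sliding_window_labels(frame_labels, window=5):
    """Emit a gesture label whenever the majority label in a sliding
    window of recent frames changes from the last emitted label."""
    out = []
    buf = deque(maxlen=window)  # last `window` frame labels
    last = None                 # last emitted label
    for label in frame_labels:
        buf.append(label)
        if len(buf) == window:
            top, count = Counter(buf).most_common(1)[0]
            # require a strict majority, and only emit on change
            if count > window // 2 and top != last:
                out.append(top)
                last = top
    return out
```

For instance, five frames of one gesture followed by five frames of another would yield two labels, while a noisy stream with no majority yields nothing.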