What it does
SLTranslate is a mobile app designed to help overcome the communication barrier between people fluent in American Sign Language and those who don't know fingerspelling. When a user opens SLTranslate, they are greeted with a home screen from which they can navigate to the Translate screen. The Translate screen features a camera view with a status bar for the currently detected word and a section for the full translated transcript. When the user signs a letter, for example an "a", the app's machine learning model recognizes it and starts a timer. If the same letter is still being detected half a second later, it is added to the transcript. This confirmation step keeps the app from adding letters the user did not mean to sign.
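As a rough sketch (not our exact code), that half-second confirmation can be implemented with a simple timestamp check; the names and the 500 ms constant below are illustrative:

```ts
// Illustrative sketch of the half-second confirmation logic, assuming the
// model emits one predicted letter per camera frame. Names are hypothetical.
const CONFIRM_DELAY_MS = 500;

let pendingLetter: string | null = null;
let pendingSince = 0;

function onLetterDetected(letter: string, transcript: string[]): void {
  const now = Date.now();
  if (letter !== pendingLetter) {
    // A different letter was detected: restart the confirmation window.
    pendingLetter = letter;
    pendingSince = now;
  } else if (now - pendingSince >= CONFIRM_DELAY_MS) {
    // The same letter has persisted for half a second: commit it.
    transcript.push(letter);
    pendingLetter = null;
  }
}
```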
Users can also clear the transcript to sign new sentences.
To make this possible, the machine learning model behind SLTranslate was trained on the 87,000 images in a Kaggle dataset. Every letter of the alphabet (A-Z), as well as spaces, can be signed within SLTranslate.
How we built it
SLTranslate's machine learning model was trained with a K-Nearest Neighbors algorithm on over 80,000 images from Kaggle. The model was exported to TensorFlow format and imported into our React Native project.
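As a rough illustration, bundling and loading a TensorFlow.js model in an Expo/React Native app can look like the sketch below; the asset paths are placeholders, and this assumes the model was exported in TensorFlow.js layers format (our actual setup may differ in its details):

```ts
import * as tf from '@tensorflow/tfjs';
import { bundleResourceIO } from '@tensorflow/tfjs-react-native';

// Placeholder asset paths; the real exported files may be named differently.
const modelJson = require('./assets/model/model.json');
const modelWeights = require('./assets/model/weights.bin');

export async function loadSignModel(): Promise<tf.LayersModel> {
  await tf.ready(); // wait for a TensorFlow.js backend to initialize
  // Load the model that was bundled with the app's assets.
  return tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights));
}
```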
The mobile application itself was built with Expo, an open-source platform for making universal native apps for Android and iOS.
Challenges we ran into
- For most of our teammates (Eric, Jacob, Ganning, and Jolie), this was their first time using React and React Native
- It was difficult to adapt a machine learning model to a continuous camera stream. We had to figure out how to preprocess the raw frames from the phone's camera into the specific format our TensorFlow model accepts, and because the model lacked documentation, we had to read through the source code of the machine learning library we were using to reverse engineer the required data format (a rough sketch of that preprocessing is shown after this list).
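For reference, a minimal sketch of that kind of frame preprocessing with TensorFlow.js is below; the 64x64 input size and [0, 1] normalization are assumptions for illustration, not the exact format our model uses:

```ts
import * as tf from '@tensorflow/tfjs';

// Assumed model input resolution; the real value depends on how the model
// was exported.
const INPUT_SIZE = 64;

function preprocessFrame(frame: tf.Tensor3D): tf.Tensor4D {
  return tf.tidy(() => {
    // Resize the raw camera frame to the resolution the model expects.
    const resized = tf.image.resizeBilinear(frame, [INPUT_SIZE, INPUT_SIZE]);
    // Scale pixel values from [0, 255] to [0, 1].
    const normalized = resized.toFloat().div(255);
    // Add a batch dimension: [height, width, 3] -> [1, height, width, 3].
    return normalized.expandDims(0) as tf.Tensor4D;
  });
}
```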
Accomplishments that we're proud of
- Building our first Android application
- Creating an app that can help bridge the communication gap for an estimated 250,000 to 500,000 ASL users
- Creating a minimalistic and interactive user interface
What's next for SLTranslate
If we were to create an updated version of SLTranslate, we would love to improve not only the user experience but also add features that make American Sign Language more accessible to everyone.
For example, our machine learning model currently achieves only around 80% accuracy on most characters, so in future versions we would like to build a convolutional neural network with TensorFlow to improve letter recognition. Additionally, according to a professional sign language interpreter, sign language relies heavily on facial expressions. While SLTranslate does not currently use them to recognize characters, incorporating the user's expressions could help detect more accurately which letters or words they are signing.
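As a loose sketch of the kind of network we have in mind, a small CNN could be defined in TensorFlow.js as follows; the 64x64 input, layer sizes, and 27 output classes (A-Z plus space) are illustrative assumptions, not a final architecture:

```ts
import * as tf from '@tensorflow/tfjs';

// Sketch of a small CNN for letter classification. All hyperparameters here
// are placeholders for illustration.
function buildLetterCnn(): tf.LayersModel {
  const model = tf.sequential();
  model.add(tf.layers.conv2d({
    inputShape: [64, 64, 3], filters: 16, kernelSize: 3, activation: 'relu',
  }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.conv2d({ filters: 32, kernelSize: 3, activation: 'relu' }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.flatten());
  model.add(tf.layers.dense({ units: 64, activation: 'relu' }));
  model.add(tf.layers.dense({ units: 27, activation: 'softmax' }));
  model.compile({
    optimizer: 'adam',
    loss: 'categoricalCrossentropy',
    metrics: ['accuracy'],
  });
  return model;
}
```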
Furthermore, we would like to create our own learning management system within SLTranslate. For example, short videos would demonstrate how to sign each individual letter, paired with an "assessment" in which the user signs that letter. If SLTranslate recognizes the letter the user signs as the correct one, the user can move on to the next character. This feature would let users learn Sign Language on their own time, without needing a teacher, further helping us break down communication barriers.
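A minimal sketch of that assessment check, with purely illustrative names, might look like this:

```ts
// Hypothetical check for the proposed lesson mode: the learner advances only
// once the model recognizes their sign as the letter currently being taught.
function gradeAttempt(detectedLetter: string, targetLetter: string): boolean {
  return detectedLetter.toUpperCase() === targetLetter.toUpperCase();
}

// Example usage: move to the next letter after a correct attempt.
const lessonLetters = ['A', 'B', 'C'];
let lessonIndex = 0;
if (gradeAttempt('a', lessonLetters[lessonIndex])) {
  lessonIndex += 1;
}
```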
Moreover, American Sign Language includes both word-level signs and fingerspelled letters, but SLTranslate currently detects only letters. By adding the ability to detect words, SLTranslate would not only translate more efficiently but also be more useful in real-world situations.