Signall

Inspiration

As this is our first hackathon, we wanted to go big or go home. Our team wanted to build something that could improve people's lives, even if that meant learning new technologies and languages. Communication between people with hearing disabilities and the rest of the population remains difficult, and there is no doubt that a disconnect exists between the two groups. With this app we hope to encourage a stronger connection and easier communication for both sides. For that reason, we developed Signall as a simple app that can be used by anyone trying to communicate with someone with a hearing disability, or simply to learn about American Sign Language (ASL) and the community that surrounds it.

What it does

Signall takes camera input to capture ASL signs. Using a machine learning model that we trained ourselves, it recognizes the entire ASL alphabet. The letters transcribed by the model are vocalized back to the user through a Text-To-Speech API. In the other direction, it converts spoken words into the ASL signed alphabet using a Speech-To-Text API and our own Text-To-ASL algorithms.
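To illustrate the Text-To-ASL direction, here is a minimal Dart sketch of the core idea: each letter of the transcribed speech maps to an image of the corresponding sign. The function name and asset paths are our placeholders for illustration, not the actual app code.

```dart
/// Minimal sketch of the Text-To-ASL idea: map each letter of the
/// transcribed text to an image of the corresponding ASL sign.
/// The asset paths are illustrative placeholders.
List<String> textToAslFrames(String text) {
  final frames = <String>[];
  for (final rune in text.toLowerCase().runes) {
    final ch = String.fromCharCode(rune);
    if (RegExp(r'[a-z]').hasMatch(ch)) {
      frames.add('assets/asl/$ch.png'); // one sign image per letter
    }
  }
  return frames;
}
```

The returned list of frames can then be displayed in sequence, spelling the spoken word out in the ASL alphabet.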

How we built it

Signall is built on the flexible Flutter framework and written in Dart. Developed by Google, Flutter and Dart allow straightforward deployment to both iOS and Android. We first attempted to train our ASL model with our own TensorFlow implementation, but ran into complications, so we used Google's Teachable Machine website to generate a lightweight machine learning model from a limited data set. To integrate it into our Dart app, we used a Firebase ML Kit library to load the custom model. Signall also uses Speech-To-Text and Text-To-Speech libraries to read in and vocalize user input.
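The writeup doesn't name the exact speech plugins, but wiring this up in Flutter typically looks like the sketch below, which assumes the popular speech_to_text and flutter_tts packages; treat the package choice as our assumption rather than Signall's exact dependencies.

```dart
// Sketch of the two speech directions, assuming the speech_to_text
// and flutter_tts packages (our assumption, not necessarily the
// plugins Signall uses).
import 'package:flutter_tts/flutter_tts.dart';
import 'package:speech_to_text/speech_to_text.dart';

final FlutterTts _tts = FlutterTts();
final SpeechToText _speech = SpeechToText();

// Vocalize a letter transcribed by the ASL recognition model.
Future<void> speakLetter(String letter) async {
  await _tts.speak(letter);
}

// Listen for spoken words so they can be rendered as ASL signs.
Future<void> listenForSpeech(void Function(String) onWords) async {
  if (await _speech.initialize()) {
    _speech.listen(
      onResult: (result) => onWords(result.recognizedWords),
    );
  }
}
```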

Challenges we ran into

We ran into many issues integrating multiple APIs and libraries into Dart and Flutter, as they are still relatively new development tools; hopefully there will be more support for Dart in the future. We also had trouble finding a diverse data set for the ASL alphabet: the one large source we found lacked the variability needed to generalize the model, so we had to use images of our own signs as data. On top of that, we lacked the time and resources to fine-tune the model efficiently. Ultimately, we learned from facing these challenges, and we'll have more experience in how to approach such issues in the future.

Accomplishments that we're proud of

This whole project has been a huge learning experience for us all. We each had to learn different technologies to bring this ambitious project together, and collaborating so closely proved to be a rewarding bonding experience. From ASL itself to machine learning, a lot went into this project and a lot is coming home with us. Overall, the entire event was filled with fun and learning, and despite our lack of sleep, we're proud of what we were able to build from this idea.

What's next for Signall

Ideally, we would like to take this project to its full potential, as there is much to improve. For example, we want to train a more robust model capable of recognizing not only the ASL alphabet but also, through video training, the gestures for common ASL words and phrases. We also hope to publish the app for free on the App Store and the Play Store. The goal of the product was never monetary gain; we know it can benefit a lot of people, so we want to make it as available as possible.

Built With

dart, firebase, flutter, teachable-machine, tensorflow