Inspiration
We recognized the difficulty deaf individuals face when communicating in public or social settings with people who do not know sign language. Our app makes these conversations smoother: users can simply sign into their phone's camera and have the corresponding English read aloud.
What it does
Allows users to translate American Sign Language into spoken words from their phone, wherever they are.
How we built it
The frontend was built with pure HTML/CSS, meaning each display was laid out by hand without the aid of preexisting UI components. The sign recognition was built in JavaScript with ml5.js, which can track 21 hand joints live. Using a center-of-mass algorithm (denoted by the red dot), we computed the critical distances from each joint to the CoM. From these distances, we handcrafted a regression model that calculates the error from optimal positions, which we determined through trial and error.
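The matching step described above can be sketched roughly as follows. This is a minimal illustration, not our production code: the keypoint shape mirrors ml5.js hand-tracking output (one `[x, y]` pair per joint), but the template values, letter names, and the sum-of-squared-errors scoring are simplified stand-ins for the handcrafted model.

```javascript
// Center of mass (centroid) of the detected hand keypoints.
function centerOfMass(keypoints) {
  const [sx, sy] = keypoints.reduce(
    ([ax, ay], [x, y]) => [ax + x, ay + y],
    [0, 0]
  );
  return [sx / keypoints.length, sy / keypoints.length];
}

// Distance from each joint to the centroid, normalized by the largest
// distance so the feature vector is scale-invariant.
function comDistances(keypoints) {
  const [cx, cy] = centerOfMass(keypoints);
  const d = keypoints.map(([x, y]) => Math.hypot(x - cx, y - cy));
  const max = Math.max(...d);
  return d.map((v) => v / max);
}

// Error between the live features and a stored "optimal" profile
// (sum of squared differences; a simplified stand-in for our scoring).
function signError(features, template) {
  return features.reduce((acc, v, i) => acc + (v - template[i]) ** 2, 0);
}

// Pick the letter whose template gives the smallest error.
function classify(keypoints, templates) {
  const features = comDistances(keypoints);
  let best = null;
  let bestErr = Infinity;
  for (const [letter, template] of Object.entries(templates)) {
    const e = signError(features, template);
    if (e < bestErr) {
      bestErr = e;
      best = letter;
    }
  }
  return best;
}
```

In practice the templates would hold one distance profile per ASL letter, calibrated by trial and error as described above.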
Challenges we ran into
We switched project ideas multiple times because our earlier ideas were not feasible, which left us extremely limited on time.
Accomplishments that we're proud of
Getting the alphabet recognition and translation to work. We were shocked to find that the model actually worked on our first try!
What we learned
How to work with ml5.js and JavaScript for machine learning.
What's next for SignSync
Expanding the sign recognition beyond the ASL alphabet, to other words.