Inspiration

Imagine living in a foreign country. Think about it. How would you feel if you lived in an area where the language, customs, and culture weren't native to you? You'd probably go through each day with reluctance and uncertainty. You'd want to say what's appropriate, not something that would be viewed as ignorant. You'd feel frustrated when you want to state your opinion but couldn't make yourself understood. You'd feel isolated when everyone was laughing at a joke, and you didn't understand the punch line.

Deaf and hard of hearing people often feel this way when they're surrounded by hearing people.

But the issue has more than just cultural and social implications: there's a critical shortage of accredited interpreters. According to the Bureau of Labor Statistics, demand for sign language interpreters is expected to rise 46% from 2012 to 2020. Only five Canadian post-secondary programs currently educate interpreters, and each of these programs graduates just 6-13 people a year. Some of them operate on a cohort system, meaning that classes graduate only every 2-3 years.

The shortage of accredited interpreters means that basic democratic rights, such as representing yourself in court, seeing your doctor, and finding employment, are being obstructed for a disabled and often misunderstood minority.

Signatio is committed to removing such obstructions because we believe that everyone deserves to have their voice heard.

What it Does

We use machine learning and computer vision to gamify learning American Sign Language and break down its barriers. Our web application encourages users to become more proficient in ASL: they improve their skills by completing modules and unlocking levels that gradually increase in difficulty.

We've included one level and three modules in our minimum viable product. Our first level seeks to teach users the ASL alphabet:

  • An initial module that guides the user through the entire ASL alphabet
  • A second module that generates a randomized multiple-choice test to help users associate the English and ASL alphabets
  • A third and final module that tests the user's mastery of the alphabet by asking them to sign a letter without any aid

How I Built It

We used Flask and Bootstrap to build our easy-to-use, interactive educational platform. For recognition, we trained a multi-class image classification model: the webcam captures the user's hand, and the model checks whether the sign matches the letter it was trained on. For the multiple-choice module, we use Python's random library to pick a letter from the list of ASL letters, remove it from the list so it isn't asked again, then pick two other letters as distractors and present all three options; the user must select the letter that corresponds to the displayed sign. The final module simply picks a random letter and asks the user to sign it without any guide or aid showing what the hand gesture looks like. The first module works on the same idea, except it also shows an image of the letter to demonstrate the gesture.
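The question-generation logic described above can be sketched in a few lines of Python. This is a minimal illustration, not our exact code; the function and variable names here are hypothetical:

```python
import random

# The pool of letters the quiz draws from.
ASL_LETTERS = [chr(c) for c in range(ord('A'), ord('Z') + 1)]

def make_question(remaining):
    """Pick a target letter, remove it from the remaining pool so it
    isn't asked twice, and build a three-option multiple-choice question."""
    answer = random.choice(remaining)
    remaining.remove(answer)
    # Draw two distinct distractor letters that differ from the answer.
    distractors = random.sample([l for l in ASL_LETTERS if l != answer], 2)
    options = [answer] + distractors
    random.shuffle(options)
    return answer, options
```

Using `random.sample` for the distractors guarantees the three options are distinct without iterating over every combination.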

Challenges I Ran Into

  • Embedding our webcam feed into the web app proved quite difficult, so instead we built a separate application to feed data into the web app while simultaneously running a stream on our platform, both to accurately mimic the UI of our modules and to obtain accurate results
  • Matching each letter to an image was tricky; we found that matching the image to its file path name was the simplest approach
  • Preventing duplicate options in our multiple-choice questions without iterating through every combination was hard, so we developed our own algorithm to do so
  • Routing our app was a challenge, as we had to integrate our HTML buttons and anchor tags with our Python back end
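The routing challenge above comes down to pointing HTML buttons and anchor tags at Flask view functions. Here is a minimal sketch of that pattern; the route names and inline template are illustrative only, not our actual pages:

```python
from flask import Flask, render_template_string, url_for

app = Flask(__name__)

# Inline template standing in for a real page: an anchor tag and a
# form button, both resolved to Python routes via url_for.
PAGE = """
<a href="{{ url_for('module', number=1) }}">Module 1</a>
<form action="{{ url_for('module', number=2) }}" method="get">
  <button type="submit">Module 2</button>
</form>
"""

@app.route("/")
def index():
    return render_template_string(PAGE)

@app.route("/module/<int:number>")
def module(number):
    # In the real app this would render the selected module's page.
    return f"Module {number}"
```

Generating links with `url_for` instead of hard-coding paths keeps the templates working even if a route's URL changes.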

Accomplishments that I'm Proud of

Our entire team worked around the clock to solve what seemed like an impossible number of technical challenges. But what we're really proud of is that we used technology to try to make a real change. Discrimination in any form closes the door to equal opportunity, a fundamental right of Canadian citizenship and democracy itself. Our team strongly believes that the culturally deaf, oral deaf, deafened, and hard of hearing have the right to fair and equitable treatment and to communicate both fully and freely. Making an effort to solve this issue is our biggest accomplishment.

What I Learned

We learned numerous technologies such as Flask, Bootstrap, and the general ideas behind image classification models; only one developer in our group had prior experience with these topics. The rest of us had little to no experience with web frameworks, since most of our background is in plain JavaScript, libraries such as React, or frameworks such as Spring. Image classification is a form of machine learning we had always found intimidating, but by basing our model on public GitHub repositories it turned out to be quite approachable: we could work at a high level without needing to understand every detail of how the model works. Working with more experienced hackers also taught us how to tackle a problem: focus on the pitch before diving into implementation, and tone down irrational hopes and expectations before starting. Hackathons give developers very little time to implement features, and the key is to be innovative but also reliable, so that you can demo the project without any issues.

What's Next for Signatio

There's a lot more work we plan to put into Signatio. We only managed to create one level and three modules, and we had many more features and improvements in mind:

  • Add a hangman module for level one
  • Implement video analysis, since ASL has many gestures and movements we currently can't handle because we only analyze static images
  • Add more levels and module material (words, sentences, and topics)
  • Add a statistics page
  • Add real-time tutoring and playing with other users

  • Add spaced repetition learning to the platform