Inspiration

I have been very close to my nanny since I was a baby. Since then, I've watched her go from a completely healthy person to slowly losing her hearing. She always carries a notepad with her so that others can write in it, just to "talk" with her.

If only there was a better way.

One in six adults in the UK alone is affected by hearing loss. But since much of the population doesn't know sign language, it's difficult for people with hearing loss to take in information quickly. Daily tasks that involve interacting with others, such as going to the doctor, can be especially overwhelming with so many technical terms.

What it does

I created Call4All, an app that allows video calls with automated sign language support. It can be used in many scenarios: e-meeting family and friends, receiving doctor consultations, and much more.

How I built it

  1. The app uses speech recognition to convert the speaker's audio into text.
  2. From there, the sentence is processed with the Natural Language Toolkit (NLTK) to fit American Sign Language (ASL) sentence structure, omitting stop words and identifiers (e.g. a, the, to).
  3. Via WLASL, an open-source word-level ASL dataset, the app returns demo sign language videos that correspond to the input sentence.
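Steps 2 and 3 can be sketched in a few lines of Python. This is a simplified illustration rather than the app's actual code: the real app uses NLTK's stop-word corpus (a small hand-written set stands in for it here), and `WLASL_VIDEOS` is a hypothetical stand-in for the WLASL video lookup.

```python
# Sketch of steps 2-3: clean an English sentence into an ASL-style
# gloss, then look up a demo video for each remaining word.

# Hand-written stand-in for NLTK's English stop-word list.
STOP_WORDS = {"a", "an", "the", "to", "is", "are", "of", "in", "and"}

# Hypothetical word -> demo-video mapping, as a WLASL lookup might return.
WLASL_VIDEOS = {
    "doctor": "videos/doctor.mp4",
    "help": "videos/help.mp4",
    "you": "videos/you.mp4",
}

def to_asl_gloss(sentence: str) -> list[str]:
    """Drop stop words and punctuation to approximate ASL word order."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    return [w for w in words if w and w not in STOP_WORDS]

def gloss_to_videos(gloss: list[str]) -> list[str]:
    """Return the demo video for each gloss word that has one."""
    return [WLASL_VIDEOS[w] for w in gloss if w in WLASL_VIDEOS]

gloss = to_asl_gloss("The doctor is going to help you.")
print(gloss)                   # ['doctor', 'going', 'help', 'you']
print(gloss_to_videos(gloss))  # ['videos/doctor.mp4', 'videos/help.mp4', 'videos/you.mp4']
```

The videos returned by the lookup are then played back in order alongside the call, giving a rough signed rendering of the spoken sentence.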

The end result is a video call app with speech-to-text-to-ASL support, so the deaf community can feel less intimidated when connecting with others.

Challenges I ran into

The biggest challenge was making the app work in real time, which the Agora SDK/UIKit helped with tremendously. Translating sentences from plain English to ASL sentence structure remains difficult, since it requires a lot of natural language processing. The current version using NLTK works well, but doesn't "clean" sentences correctly every time. With more time, I'd definitely keep working on perfecting this.

What I learned

I learned to create a real-time, synchronous Django app with the help of Agora, and I integrated several libraries to create a fluid experience for deaf users.

What's next for Call4All

As the name suggests, the meeting app should accommodate all: not only people with hearing difficulties, but also the visually impaired and mute communities. This could be done with machine learning and OpenCV to detect a mute user's sign language communication.
