When I was in school, a friend of mine developed bilateral hearing loss. She struggled with basic things like reading in class, and it took her eight years of therapy to learn to speak well. Although she has no speech discrimination ability, she can hear some sounds. As I grew older, this topic drew my attention, and the more I researched it, the more horrifying facts I found. That is when I realized that technology could be used to improve this situation for people with hearing impairments.

What it does

The application produces two types of output when a video is played:

  1. It shows the subtitles of the YouTube content as text.
  2. It displays the American Sign Language (ASL) rendering of the same content.

How we built it

We use Streamlit to build an interactive front end. Python code fetches the video from the URL given by the user, extracts the subtitles, and converts them into American Sign Language (ASL). We deployed the application on both Azure and Google Cloud, and maintained a collaborative project history on GitHub Global Campus.
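The pipeline above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the function names, the URL-parsing regex, and the fingerspelling asset paths are all assumptions for the sake of the example.

```python
import re
import string

def extract_video_id(url: str) -> str:
    """Pull the 11-character video ID out of a user-supplied YouTube URL."""
    match = re.search(r"(?:v=|youtu\.be/)([\w-]{11})", url)
    if not match:
        raise ValueError(f"Could not find a video ID in: {url}")
    return match.group(1)

def text_to_asl_frames(subtitle_text: str) -> list:
    """Map each letter of a subtitle line to a fingerspelling image
    (hypothetical asset filenames), which the front end can display."""
    frames = []
    for ch in subtitle_text.lower():
        if ch in string.ascii_lowercase:
            frames.append(f"assets/asl/{ch}.png")
        elif ch.isspace():
            frames.append("assets/asl/space.png")  # pause between words
    return frames

# Example: parse a URL, then render a subtitle line as ASL frames.
video_id = extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
frames = text_to_asl_frames("hi")
```

In the real app, the subtitles would come from the fetched video's caption track, and a Streamlit widget (e.g. `st.image`) would cycle through the returned frames.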

Challenges we ran into

We had to troubleshoot the deployment issues we faced on Azure, and we were able to overcome them. The web app is also deployed on Google Cloud Platform with the help of Docker; dockerizing the web app presented its own challenges.
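A Dockerfile for a Streamlit app typically looks roughly like this; it is an illustrative sketch (the entry-point filename `app.py` and the dependency file are assumptions), not the project's actual configuration:

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Streamlit serves on 8501 by default; bind to 0.0.0.0 inside the container.
EXPOSE 8501
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
```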

Accomplishments that we're proud of

We are proud to make our small contribution to society and to make life a little better for deaf people. The application can also help educators and parents with training. Technically, we are able to demo the application end to end.

What we learned

We learned to use Streamlit to build a web app front end efficiently, to version the project files with GitHub, and to deploy the web app on cloud platforms such as Google Cloud and Azure.

What's next for Sign Speller

As they say, there is always room for improvement; we see the following opportunities to improve Sign Speller:

  1. Dynamic streaming of sign language alongside the video stream.
  2. Developing a mobile app.
  3. Optimizing back-end efficiency with the help of Large Language Models (LLMs).
  4. Building a sign language chatbot.
  5. Adding animated hand gestures to improve interaction.

Built With

python, streamlit, docker, azure, google-cloud, github