Overview

People today are more connected than ever, but real obstacles to communication remain, particularly for people who are deaf or nonverbal and cannot communicate by speaking. Our app enables bi-directional communication between people who use sign language and people who speak.

You can sign in ASL in front of your device's camera, and our app converts the signs to text for the other person to view. Conversely, you can record speech with your microphone, and the app transcribes it into text for the other person to read.

How we built it

We used OpenCV and TensorFlow to build the Sign-to-Text functionality, training our model on over 2,500 frames. For the Text-to-Sign functionality, we used AssemblyAI to convert audio files into transcripts. Both functions are written in Python, and our backend server uses Flask to expose them to the frontend.
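To give a sense of the Sign-to-Text loop, here is a minimal sketch: grab webcam frames with OpenCV, preprocess them, and run them through the trained classifier. The model path `sign_model.h5`, the 224×224 input size, and the `LABELS` list are illustrative assumptions, not our exact setup.

```python
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("sign_model.h5")  # hypothetical model path
LABELS = ["A", "B", "C"]  # placeholder for the signs the model was trained on

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame: resize to the model's input size, scale to [0, 1]
    img = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    sign = LABELS[int(np.argmax(probs))]
    # Overlay the predicted sign on the live feed
    cv2.putText(frame, sign, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("SignTube", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```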

For the frontend, we used React (JavaScript) and Material-UI to create a visual, accessible way for users to communicate.
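The wiring between frontend and backend looks roughly like the sketch below: a Flask server with one endpoint per function that the React frontend posts to. The route names, the `predict_sign()` stub, and the AssemblyAI SDK calls shown here are assumptions for illustration, not our exact code.

```python
import assemblyai as aai
from flask import Flask, jsonify, request

aai.settings.api_key = "YOUR_ASSEMBLYAI_KEY"  # placeholder key
app = Flask(__name__)

def predict_sign(image_bytes: bytes) -> str:
    """Hypothetical wrapper around the TensorFlow model from the sketch above."""
    ...  # decode with cv2.imdecode, preprocess, model.predict, map to a label
    return "A"

@app.route("/sign-to-text", methods=["POST"])
def sign_to_text():
    frame = request.files["frame"]  # a single camera frame posted by the frontend
    return jsonify({"text": predict_sign(frame.read())})

@app.route("/speech-to-text", methods=["POST"])
def speech_to_text():
    audio = request.files["audio"]  # recorded audio posted by the frontend
    audio.save("upload.wav")
    transcript = aai.Transcriber().transcribe("upload.wav")  # AssemblyAI SDK
    return jsonify({"text": transcript.text})

if __name__ == "__main__":
    app.run(port=5000)
```

The React app can then POST a frame or an audio clip to these routes and render the returned text for the other person.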

Challenges we ran into

  • We had to retrain our models multiple times to get them to perform well enough.
  • We switched from running the application entirely in Jupyter (via Anvil) to a React app at the last minute.

Accomplishments that we're proud of

  • Using so many tools, languages, and frameworks at once, and making them all work together :D
  • Submitting on time (we hope? 😬)

What's next for SignTube

  • Add more signs!
  • Use AssemblyAI's real-time API for more streamlined communication
  • Incorporate account functionality + storage of videos

Built With

assemblyai, flask, material-ui, opencv, python, react, tensorflow