Inspiration

"Society has the tendency to focus on disability rather than ability" said Rachel Kolb in her Ted talk, and we are here to fix that. Ever wondered how difficult, tiring and time consuming it is for a Sign language interpreter? We provide a solution that automates this task.

What it does

A real-time speech-to-Cued-Speech app that makes it possible for people with partial or complete hearing loss to interpret spoken language with ease and communicate fluently. Our app takes raw speech as input and displays a video of a virtual human 'cueing' the exact same words.

Why Cued Speech?

Unlike ASL, which is a language in its own right, Cued Speech is a visual system of communication used among deaf or hard-of-hearing people. It represents the syllables of a spoken language with hand shapes and placements, and takes only about 20 hours to learn. Studies show that lip reading alone yields an interpreting accuracy of around 30%, but combined with Cued Speech this rises to a mind-blowing 96%.

How we built it and product workflow

We first put considerable effort into learning Cued Speech ourselves. We then wrote a Python library that interprets Cued Speech using the CMU Dictionary of English pronunciations. Given an input sentence, the library breaks each word into its syllables and returns the corresponding Cued Speech positions out of the 64 available ones.
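To make the lookup concrete, here is a minimal sketch of the idea, assuming the `cmudict` PyPI package; the handshape and placement tables below are illustrative stand-ins, not our library's actual mapping of the 64 positions.

```python
import cmudict

PRONUNCIATIONS = cmudict.dict()  # e.g. {"hello": [["HH", "AH0", "L", "OW1"]], ...}

# Illustrative (hypothetical) tables: Cued Speech pairs one of 8 consonant
# handshapes with one of 4 vowel placements to form each cue.
HANDSHAPE = {"HH": 3, "L": 6, "W": 6, "R": 3}              # consonant phoneme -> handshape id
PLACEMENT = {"AH": "throat", "OW": "side", "EH": "mouth"}  # vowel phoneme -> placement

def cues_for_word(word):
    """Return (handshape, placement) pairs for the first CMU pronunciation."""
    phones = PRONUNCIATIONS[word.lower()][0]          # raises KeyError for unknown words
    cues, handshape = [], None
    for p in phones:
        base = p.rstrip("012")                        # drop the stress marker on vowels
        if base in PLACEMENT:                         # a vowel completes the cue
            cues.append((handshape or 5, PLACEMENT[base]))  # 5 = "no consonant" handshape
            handshape = None
        else:                                         # a consonant starts the cue
            if handshape is not None:                 # lone consonant -> default to side
                cues.append((handshape, "side"))
            handshape = HANDSHAPE.get(base, 5)
    if handshape is not None:
        cues.append((handshape, "side"))
    return cues

print(cues_for_word("hello"))  # e.g. [(3, 'throat'), (6, 'side')]
```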

Our app records incoming voice, converts it into text, and sends it to the Python script running on a Flask server on Google Cloud. The text is passed to the library, which returns the corresponding Cued Speech positions. We then fetch a "fake" face from thispersondoesnotexist.com and apply lip sync to it with Wav2Lip, a state-of-the-art deep learning model.
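A minimal sketch of what such a server endpoint could look like, assuming the Wav2Lip repository is checked out next to the script, `cues_for_word` from the sketch above, and an `overlay_cues` helper like the one sketched below; the route name and file names are illustrative, not the exact server code.

```python
import subprocess
import requests
from flask import Flask, request, send_file
from gtts import gTTS

app = Flask(__name__)

@app.route("/cue", methods=["POST"])
def cue():
    text = request.get_json()["text"]
    cues = [cues_for_word(w) for w in text.split()]       # Cued Speech positions per word

    gTTS(text).save("speech.mp3")                         # audio track that drives the lip sync
    face = requests.get("https://thispersondoesnotexist.com", timeout=10)
    with open("face.jpg", "wb") as f:
        f.write(face.content)                             # GAN-generated "fake" face

    # Wav2Lip's inference script writes the lip-synced clip to results/result_voice.mp4.
    subprocess.run(["python", "Wav2Lip/inference.py",
                    "--checkpoint_path", "wav2lip_gan.pth",
                    "--face", "face.jpg", "--audio", "speech.mp3"], check=True)

    final = overlay_cues("Wav2Lip/results/result_voice.mp4", cues)  # hand animation step
    return send_file(final, mimetype="video/mp4")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```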

The lip-synced video and the corresponding Cued Speech positions then go into a simple script that animates the hand positions onto the video. The final video is sent back and played in our app.
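The hand-animation step could look roughly like the sketch below, assuming pre-rendered PNGs of the handshapes and hypothetical pixel offsets for the placements; the real script's assets, blending, and timing logic may differ.

```python
import cv2

PLACEMENT_XY = {"mouth": (0.62, 0.45), "chin": (0.60, 0.60),   # fraction of frame (x, y)
                "throat": (0.58, 0.75), "side": (0.80, 0.50)}

def overlay_cues(video_path, cues, out_path="cued.mp4"):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    flat = [c for word in cues for c in word] or [(5, "side")]  # one cue per syllable
    frames_per_cue = max(1, int(cap.get(cv2.CAP_PROP_FRAME_COUNT) // len(flat)))

    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        handshape, placement = flat[min(i // frames_per_cue, len(flat) - 1)]
        hand = cv2.imread(f"handshapes/{handshape}.png")        # pre-rendered handshape image
        hx = int(PLACEMENT_XY[placement][0] * w)
        hy = int(PLACEMENT_XY[placement][1] * h)
        hh, hw = hand.shape[:2]
        y2, x2 = min(hy + hh, h), min(hx + hw, w)
        frame[hy:y2, hx:x2] = hand[:y2 - hy, :x2 - hx]          # naive paste; a real script blends alpha
        out.write(frame)
        i += 1
    cap.release()
    out.release()
    return out_path
```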

Although the library was doing a pretty good job, it would throw a KeyError when given a word outside the CMU Dictionary. To fix this, we trained a Transformer model on all the existing pronunciations in the CMU Dictionary, giving us a state-of-the-art text-to-phoneme model. For a word outside the dictionary, the script asks the model to predict its pronunciation instead.
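Only the try/except pattern here comes from the write-up above; the `predict_phonemes` wrapper around the trained Transformer is a hypothetical name used for illustration.

```python
def phones_for(word, pronunciations, predict_phonemes):
    """Look up a word in the CMU Dictionary, falling back to the G2P model."""
    try:
        return pronunciations[word.lower()][0]      # fast path: CMU Dictionary lookup
    except KeyError:
        # Out-of-vocabulary word: let the grapheme-to-phoneme Transformer guess,
        # e.g. predict_phonemes("covid") -> ["K", "OW1", "V", "IH0", "D"]
        return predict_phonemes(word.lower())
```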

Summary

  1. Input speech is interpreted as text and sent to a Python script running on GCP.
  2. A Python library takes this text and gives the corresponding Cued Speech positions (using the CMU Dictionary).
  3. If a word from the sentence is not in the dictionary, a model trained on the dictionary is asked to predict the pronunciation of this new word.
  4. The text, converted to audio with the gTTS Python library, is sent along with a face of a human that does not exist to a deep learning model (Wav2Lip) that animates the lip sync.
  5. The video and the corresponding positions are sent to a script that animates the hand positions onto the video.
  6. The app plays the video (a minimal client-side sketch follows this list).
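A rough client-side view of steps 1 and 6, assuming the `/cue` endpoint from the server sketch above and using the SpeechRecognition package in place of the Android speech-to-text step; the server address is a placeholder.

```python
import requests
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as mic:                      # step 1: capture speech
    audio = recognizer.listen(mic)
text = recognizer.recognize_google(audio)         # step 1: speech -> text

resp = requests.post("http://<gcp-vm-ip>:8080/cue", json={"text": text}, timeout=120)
with open("cued_speech.mp4", "wb") as f:          # step 6: save and play the returned video
    f.write(resp.content)
```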

Challenges we ran into

  * Learning Cued Speech and writing an effective Python library for it
  * Integrating GCP with Android Studio
  * Implementing a translation architecture with transformers for text-to-phoneme conversion

Accomplishments that we're proud of

A real-time speech-to-Cued-Speech app that makes natural language interpretation easier without the need for a sign language interpreter.

What we learned

  * Cued Speech and how to converse in it
  * How to build and run Apache2 servers on Linux VMs
  * (Almost) everything about transformers, the state-of-the-art deep learning architecture

What's next for interact.ai

We plan on integrating ASL and generalizing the app. We also plan to turn this into a flexible API and open-source it for further use.
