Inspiration

The USA has several deaf communities with more than 250,000 sign language users. We want to bridge the communication gap between those who speak and those who sign.

What it does

A person who wants to understand sign language opens our app and points it at the person signing in ASL (American Sign Language). The app captures a frame and sends it to the server. The server analyzes the frame with a pre-trained model to recognize the ASL character and sends back the corresponding English letter. That letter is placed in the augmented-reality environment next to the signer in real time.
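The server-side recognition step can be sketched as below. This is an illustrative sketch, not our actual code: the model is stubbed out with a hypothetical `predict` function (a real version would run the pre-trained TensorFlow model on the frame), and the class list matches the four characters we trained on.

```python
# Sketch of the server's classification step (illustrative only).
# A real deployment would load the pre-trained TensorFlow model;
# here `predict` is a stub that returns one score per class.

CLASSES = ['A', 'B', 'C', 'V']  # the four characters the model was trained on

def predict(frame_bytes):
    # Stub for model inference: a real version would preprocess the
    # frame and run the model, returning a score for each class.
    return [0.05, 0.85, 0.05, 0.05]

def classify_frame(frame_bytes):
    """Return the English letter for the ASL sign in the captured frame."""
    scores = predict(frame_bytes)
    best = max(range(len(CLASSES)), key=lambda i: scores[i])
    return CLASSES[best]
```

The app would POST each captured frame to an endpoint wrapping `classify_frame` and place the returned letter into the AR scene.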

How we built it

Challenges we ran into

  • Finding a relevant dataset and coping with the vastness of the language. We ultimately trained only on the characters 'A', 'B', 'C', and 'V'.
  • Finding a working API to capture images from the AR camera in real time.
  • Making the AR environment more presentable in the given time frame.

Accomplishments that we're proud of

  • Being able to use so many different tools in under 24 hours.
  • Being able to build a working prototype with a smaller dataset.

What we learned

  • We used ARKit and TensorFlow for the first time.
  • We learned some ASL ourselves.

What's next for Inclusion - Talking with those who don't 'talk'

  • Making the AR environment more presentable.
  • Training the model for more characters and gestures.
  • Reducing latency while transmitting and processing images.

Built With
