Inspiration

We live in a time where it is easier than ever to communicate with others, thanks to technology. We have always wanted to learn sign language to connect with even more individuals, including those with hearing impairments. Plus, a childhood game of ours is Cooking Mama, a cooking game that throws a series of mini obstacles at you on the way to successfully cooking a meal. We wanted to combine these two visions to create a cozy gaming experience for anyone interested in learning sign language.

What it does

It is a cooking game aimed at teaching American Sign Language (ASL) through a virtual cooking experience, where you cook a recipe by signing the instructions in front of your webcam. For this hackathon, the recipe is scrambled eggs. For each instruction, a keyword is highlighted in the sentence, and a guided video at the bottom of the screen shows the user how to sign the highlighted word in ASL. You progress when the computer vision model successfully recognizes your hand gesture. Turning the learning experience into a game makes learning that much more fun.
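The progression logic above can be sketched roughly like this (the step wording, `RecipeStep`, and `advance_if_signed` are our illustrative inventions, not the actual implementation):

```python
from dataclasses import dataclass

@dataclass
class RecipeStep:
    instruction: str  # full sentence shown to the player
    keyword: str      # highlighted word the player must sign

# Scrambled-eggs recipe as an ordered list of steps (illustrative wording).
RECIPE = [
    RecipeStep("Crack the eggs into a bowl", "eggs"),
    RecipeStep("Whisk until smooth", "whisk"),
    RecipeStep("Pour into the hot pan", "pour"),
]

def advance_if_signed(step_index: int, recognized_sign: str) -> int:
    """Advance to the next step only when the model's predicted sign
    matches the current step's highlighted keyword."""
    if step_index < len(RECIPE) and recognized_sign == RECIPE[step_index].keyword:
        return step_index + 1
    return step_index
```

Any wrong or unrecognized sign simply leaves the player on the same step, so they can keep practicing until the gesture lands.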

How we built it

We leveraged a pretrained sign language computer vision model and fine-tuned it on our own dataset of live recordings. This customization improved gesture recognition accuracy when using the webcam to detect the user's hand gestures for our game's recipe, which in turn improved the user experience. Our web app is built with React for the frontend, and the backend uses Django to handle the gesture processing received from the client.
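A minimal sketch of the backend's gesture-processing path, assuming the client POSTs base64-encoded webcam frames and the model exposes a `predict(image_bytes)` call (both are assumptions; the real view, URL, and model API may differ):

```python
import base64

def classify_frame(payload: str, model) -> dict:
    """Decode a base64-encoded JPEG frame sent by the client and
    return the model's predicted sign plus its confidence."""
    image_bytes = base64.b64decode(payload)
    label, confidence = model.predict(image_bytes)  # hypothetical model API
    return {"sign": label, "confidence": confidence}

# A Django view would thinly wrap this, roughly:
#
#   import json
#   from django.http import JsonResponse
#
#   def gesture_view(request):
#       body = json.loads(request.body)
#       return JsonResponse(classify_frame(body["frame"], MODEL))
```

Keeping the decode-and-predict step in a plain function like this makes it easy to unit-test with a stub model, independent of Django.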

Challenges we ran into

  • A New Stack: Some of us were working with React and machine learning models for the first time. Understanding how to integrate the pre-trained model into our project and building new React components required serious learning on our part, but it ultimately propelled us forward.
  • Training The Model Issues: Our initial model had poor accuracy due to inconsistent training data (the subjects' heights varied across videos). The model unintentionally focused on hip and leg movements instead of hand gestures. After refining our dataset and retraining, we achieved significantly better results.
  • Issues With Initial Frontend Design: Implementing our original UI design caused persistent errors, leading to a long debugging session. Instead of staying stuck, we pivoted to more of a Cooking Mama-inspired aesthetic, which aligned better with our vision.
  • Late-Night Impasse: After connecting the frontend and backend, we ran into issues passing video data to the model in the backend. This was a huge concern.
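One standard way to counter the hip-and-leg problem described above is to normalize samples to the hand region before they reach the model; this is a general technique we sketch here, not necessarily what our final pipeline does. It assumes per-frame hand landmarks with x, y coordinates in [0, 1], e.g. from a hand-tracking step:

```python
def normalize_hand_landmarks(landmarks):
    """Translate landmarks so the wrist (first point) becomes the origin,
    then scale by the hand's bounding-box size, so body position and
    camera distance stop influencing the features."""
    wrist_x, wrist_y = landmarks[0]
    shifted = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    xs = [x for x, _ in shifted]
    ys = [y for _, y in shifted]
    # Guard against a degenerate single-point hand with `or 1.0`.
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]
```

After this transform, a tall and a short signer making the same handshape produce near-identical feature vectors, which is exactly what the retrained model needs.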

Accomplishments that we're proud of

  • Our Resilience: Despite the challenges we faced, we are proud of the work we managed to produce. There were many moments where we could have just quit and gone home to get a good night's sleep, but we stuck it out and gave it our best.
  • Our Adaptability: Being able to learn enough about a new technology to be able to use it in a project in less than 24 hours is something we can all be proud of.

What we learned

  • Our Team Members: When facing trials, who you have on your team makes the difference between feeling defeated and mustering the energy to keep moving forward.
  • Thinking On Our Feet: Knowing when to abandon an approach and pivot to a new idea or path.

What's next for Signing Mama

  • Build and Train Our Own Model: Due to the short timeframe of the hackathon, we chose to use a pre-built model. Going forward, we would like to create a model more tailored to the purposes of our project and introduce more diversity into the training data.
  • Develop More Recipes: Create more levels of varying difficulty to help users build a broader ASL vocabulary.
