Inspiration

We were inspired by the theme of Tech for Good and the desire to make learning American Sign Language (ASL) more accessible, interactive, and fun. Communication is a fundamental human right, yet there are still major gaps in access between hearing individuals and the Deaf/Hard of Hearing community. We wanted to build a tool that not only teaches ASL in a beginner-friendly way but also uses AI to support and guide users throughout their learning journey.

What it does

Sign2Me is a web app that helps users learn and practice ASL letters through:

  • Real-time hand tracking with MediaPipe

  • ASL gesture recognition powered by a custom-trained machine learning model

  • Interactive practice sessions that test users on random signs

  • AI-generated feedback using Google’s Gemini API to coach users as they sign

  • Responsive UI built with React and Tailwind CSS

The app tells you which letter to sign, uses your webcam to track your hand, and gives you feedback to help improve your accuracy. Once the correct sign is detected, you're prompted to move on to the next letter!
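
For the curious, the core tracking loop boils down to something like this minimal sketch (the window name and confidence threshold here are illustrative, not our exact code):

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
cap = cv2.VideoCapture(0)  # default webcam

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # 21 landmarks per hand; x/y are normalized to [0, 1]
            coords = [(p.x, p.y) for p in results.multi_hand_landmarks[0].landmark]
            # coords is what gets fed to the classifier
        cv2.imshow("Sign2Me", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```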

How we built it

We used a full-stack architecture:

Frontend: Built with React, styled with Tailwind CSS, and hosted on Vercel

Backend: A Flask server, deployed on Railway, that:

  • Processes live webcam input via MediaPipe

  • Predicts the signed letter using a custom-trained ML model

  • Uses Gemini (Google Generative AI API) to provide real-time, sign-specific feedback (a rough sketch of this route follows below)
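
To make the flow concrete, here is roughly what the prediction route looks like. The endpoint path, model file, API key handling, and Gemini model name are illustrative assumptions, not our exact code:

```python
from flask import Flask, jsonify, request
import joblib
import numpy as np
import google.generativeai as genai

app = Flask(__name__)
clf = joblib.load("asl_classifier.joblib")          # placeholder model file
genai.configure(api_key="YOUR_API_KEY")             # placeholder key
gemini = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

@app.route("/predict", methods=["POST"])
def predict():
    body = request.get_json()
    # 21 MediaPipe landmarks flattened into 42 x/y values
    features = np.array(body["landmarks"], dtype=float).reshape(1, -1)
    letter = str(clf.predict(features)[0])

    feedback = None
    target = body.get("target")
    if target and letter != target:
        prompt = (
            f"A beginner is trying to sign the ASL letter '{target}' but the "
            f"classifier sees '{letter}'. In one encouraging sentence, tell "
            f"them how to adjust their hand."
        )
        feedback = gemini.generate_content(prompt).text

    return jsonify({"letter": letter, "feedback": feedback})
```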

Machine Learning:

  • Trained a classifier on our own custom-collected samples

  • Focused on x/y coordinate data from MediaPipe hand landmarks to improve generalization and reduce 3D noise (a simplified training sketch follows below)
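
Roughly, the training step looked like the sketch below, shown here with a random-forest classifier and a placeholder dataset file as stand-ins (not necessarily what we shipped):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = np.load("landmarks.npz")   # placeholder: custom-collected samples
X, y = data["X"], data["y"]       # X: (n, 42) flattened x/y pairs, y: letter labels

# Re-center each sample on the wrist landmark (index 0) so the model learns
# hand shape rather than where the hand sits in the frame
X = X.reshape(len(X), 21, 2)
X = (X - X[:, :1, :]).reshape(len(X), -1)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")
```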

Challenges we ran into

Model Confusion: Letters like “C,” “G,” and “O” look similar on camera and were hard to distinguish, especially across variations in hand size, angle, and lighting

React + Tailwind + Webcam: Integrating the MediaPipe webcam stream broke parts of our Tailwind/PostCSS configuration, which forced a full rebuild of our frontend environment

Timing Feedback Logic: Designing a system that could provide non-intrusive, real-time coaching (not just feedback after a correct answer) required careful state management and prompt design; a simplified sketch of the gating idea follows below
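
The essence of the gating logic: only coach when the same wrong prediction has been stable across a stretch of frames, and never more often than a cooldown allows. The window size and cooldown here are made-up numbers for illustration:

```python
import time
from collections import deque

class FeedbackGate:
    """Decide when real-time coaching is worth interrupting the user."""

    def __init__(self, window=15, cooldown_s=5.0):
        self.recent = deque(maxlen=window)  # last N predictions
        self.cooldown_s = cooldown_s
        self.last_coached = 0.0

    def should_coach(self, predicted: str, target: str) -> bool:
        self.recent.append(predicted)
        stable = (
            len(self.recent) == self.recent.maxlen
            and len(set(self.recent)) == 1
        )
        off_cooldown = time.time() - self.last_coached > self.cooldown_s
        if stable and predicted != target and off_cooldown:
            self.last_coached = time.time()
            return True
        return False
```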

Accomplishments that we're proud of

Successfully integrated a real-time ML pipeline between React and Flask

Built a beautiful, accessible UI that gives users clear visual feedback as they sign

Used Gemini AI not just for basic prompts, but to dynamically help users improve based on their actual sign input

Created a working MVP that helps users learn ASL letters in a fun, encouraging way

What we learned

How to deploy and connect a full-stack web app with React + Flask

How to train and tune a MediaPipe-based ASL recognition model using minimal features (just x/y coordinates)

How to use frameworks like React and Tailwind

How to prompt-engineer Gemini for contextual guidance based on model outputs

How to balance UX, accessibility, and technical functionality for a more inclusive app

What's next for Sign2Me

We’d love to keep building on this foundation! Some next steps include:

Expanding the model to support two-handed signs and dynamic gestures like “J” and “Z”

Adding user login and streak tracking to encourage habit-building

Introducing multi-language UI options for broader accessibility

Creating lesson-based modules with progressive difficulty

Exploring voice-to-sign translation for real-time communication

Built With

  • React

  • Tailwind CSS

  • Flask (Python)

  • MediaPipe

  • Google Gemini API

  • Vercel

  • Railway
