Inspiration

We wanted to make learning American Sign Language feel more interactive, motivating, and, most of all, accessible. Instead of static flashcards or passive videos, we built a browser-based experience where users can practice with their webcam, get instant feedback, and stay engaged through streaks, quizzes, and personalized coaching.

What it does

ChatASL teaches ASL letters through five learning modes: Learn, Practice, Quiz, Translate, and Spell. Users can sign letters in front of their webcam and receive real-time predictions, practice weak letters, track progress, and get AI-generated coaching and dashboard summaries. The app also supports personalized word generation for spelling practice based on the user's performance.

How we built it

We built the frontend with Next.js and React, styled with Tailwind CSS. Webcam input is captured in the browser and sent to a Roboflow inference workflow for ASL letter recognition. Supabase handles authentication, user data, sessions, streaks, and per-letter statistics. We also integrated Gemini through a server API route to generate customized practice words and progress summaries, and to provide AI-generated coaching that helps users correct their own mistakes.
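Roughly, the capture side looks like the simplified sketch below: grab the current webcam frame, encode it as base64, and hand it to the recognition backend. The `/api/predict` route and the response shape are placeholders standing in for our actual Roboflow workflow call (the real request goes through a server-side proxy so the API key stays off the client).

```typescript
// Simplified sketch: capture one webcam frame and request a letter prediction.
// "/api/predict" and the { letter } response shape are illustrative placeholders.
async function sendFrameForPrediction(video: HTMLVideoElement): Promise<string | null> {
  // Draw the current video frame onto an offscreen canvas.
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")?.drawImage(video, 0, 0);

  // Strip the data-URL prefix so only the base64 payload is sent.
  const base64 = canvas.toDataURL("image/jpeg").split(",")[1];

  // POST to a server-side proxy route that forwards to the Roboflow workflow.
  const res = await fetch("/api/predict", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: base64 }),
  });
  if (!res.ok) return null;

  // Assumed response shape: { letter: "A", confidence: 0.97 }.
  const { letter } = await res.json();
  return letter ?? null;
}
```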

Challenges we ran into

One challenge was making the webcam and prediction flow reliable across multiple pages and modes. We also had to handle API rate limits from Gemini, so we added fallback logic to keep the app usable when the model is unavailable. Another challenge was shaping the AI output into structured, useful learning content rather than generic text.
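The fallback idea is sketched below: try the Gemini-backed route first, and if the call fails (for example, a 429 from rate limiting), serve canned practice words instead. The route path and `FALLBACK_WORDS` are illustrative, not our exact implementation.

```typescript
// Canned words used when the Gemini route is unavailable (illustrative list).
const FALLBACK_WORDS = ["CAB", "FACE", "BAD", "DECK"];

async function getPracticeWords(weakLetters: string[]): Promise<string[]> {
  try {
    const res = await fetch("/api/generate-words", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ weakLetters }),
    });
    // Treat rate limiting (429) or any non-OK status as a miss.
    if (!res.ok) throw new Error(`Gemini route returned ${res.status}`);
    const { words } = await res.json();
    return words;
  } catch {
    // Keep the app usable: prefer canned words containing a weak letter.
    const matches = FALLBACK_WORDS.filter((w) =>
      weakLetters.some((l) => w.includes(l))
    );
    return matches.length > 0 ? matches : FALLBACK_WORDS;
  }
}
```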

Accomplishments that we're proud of

We're proud that the app feels like a complete learning product rather than a prototype. The real-time feedback loop, personalized spell mode, dashboard insights, and streak tracking all work together to make practice feel consistent and rewarding. We also successfully connected multiple services (Supabase, Roboflow, and Gemini) into one smooth user experience.

What we learned

We learned how much better learning tools become when they provide immediate feedback and adapt to the learner's weak spots. We also learned how to combine classification models and generative AI in a practical way: one model detects signs, while the other helps guide practice and motivation. On the engineering side, we learned how important fallback behavior is for keeping the app stable.

What's next for our project

Next, we want to expand beyond letter-level practice into words and phrases, improve progress analytics, and make the AI coaching even more personalized. We also want to refine the prediction pipeline, add more robust offline or low-latency support, and expand the platform into a more complete ASL learning companion.

Built With

Next.js, React, Tailwind CSS, Supabase, Roboflow, and Gemini
