Inspiration

While brainstorming what to build, it hit me: we have translation apps for almost every spoken language, but there is a massive gap when it comes to Sign Language. For someone who relies on signing, standard translation apps are honestly useless. I wanted to create something that finally included them in the conversation.

What it does

HandyTalk is a two-way communication bridge. A user can upload a video of themselves signing, and Gemini translates those movements into text for others to read. To make it a real conversation, I also included a feature for hearing people: they can type text, which the app then "translates" back into signs by pulling from a dedicated sign library or dictionary.
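The text-to-sign direction can be sketched as a simple keyword match against the clip dictionary. This is only an illustration of the idea, not the project's actual code; the names `SIGN_LIBRARY` and `lookup_signs` are mine:

```python
# Hypothetical sketch of the text-to-sign lookup (illustrative names, not the real code).
# SIGN_LIBRARY maps a keyword to a pre-recorded sign clip in the library.
SIGN_LIBRARY = {
    "hello": "clips/hello.mp4",
    "thank": "clips/thank_you.mp4",
    "you": "clips/you.mp4",
}

def lookup_signs(text: str) -> list[str]:
    """Return the clip paths whose keywords appear in the typed text, in order."""
    words = text.lower().split()
    return [SIGN_LIBRARY[w] for w in words if w in SIGN_LIBRARY]

print(lookup_signs("Hello thank you"))
# → ['clips/hello.mp4', 'clips/thank_you.mp4', 'clips/you.mp4']
```

Words without a matching clip are simply skipped, which keeps the demo honest about what the library covers instead of guessing at unknown signs.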

How I built it

I kept the build focused on the power of AI and simple Python logic:

- Python & Streamlit: I used Python to write the core logic and Streamlit to create a clean, functional web interface.
- Gemini 3 Flash API: The "brain" of the project. I used its multimodal capabilities to analyze uploaded videos and describe the signs in text.
- OpenCV: For handling the video files and frame processing.
- Sign Library: A structured dictionary of video clips that the app calls upon whenever a hearing person types a matching keyword.
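One way the frame-processing step above can work is to sample a video down to a few frames per second before analysis, so the payload stays small. A minimal sketch of that sampling logic, with my own function and parameter names (the actual project may process frames differently):

```python
def sample_frame_indices(total_frames: int, video_fps: float,
                         target_fps: float = 2.0) -> list[int]:
    """Pick evenly spaced frame indices so a video is sampled at ~target_fps.

    With OpenCV, total_frames and video_fps would come from
    cap.get(cv2.CAP_PROP_FRAME_COUNT) and cap.get(cv2.CAP_PROP_FPS);
    the chosen frames are then read with cap.read() and passed on for analysis.
    """
    step = max(1, round(video_fps / target_fps))
    return list(range(0, total_frames, step))

# A 2-second clip at 30 fps, sampled at 2 fps, keeps 4 frames:
print(sample_frame_indices(60, 30.0, 2.0))  # → [0, 15, 30, 45]
```

Sampling like this trades some temporal detail for much cheaper uploads, which fits the prototype's upload-then-translate flow.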

Challenges I ran into

The biggest hurdle was definitely real-time translation. While the API is powerful at analyzing uploaded clips, achieving that same "instant" feel without massive hardware resources is difficult. Because I wanted the translations to be accurate rather than laggy, I had to focus on the video-upload method for this prototype.

Accomplishments that I'm proud of

I’m proud to be working on a social topic that actually matters. Seeing a project move from just an idea to a tool that represents a community that is often overlooked in tech is incredibly rewarding. It’s not just code; it’s a step toward better representation.

What I learned

I realized it’s not always about building the most complex "best" system; it’s about what your project actually does. Creating a massive, flashy project that doesn't solve a real-life problem is worse than creating a simpler, honest project that actually makes a difference in how people connect.

What's next for HandyTalk

The prototype is just the beginning. The roadmap includes:

- Real-time Open Camera: Moving from uploads to a live feed.
- Custom Large Datasets: Training on more diverse signs to improve accuracy.
- On-device Gesture Recognition: Using faster logic (possibly C++) to reduce API dependency and make the app more responsive.
