🚀 Inspiration

Communication is a fundamental human right, yet millions in the deaf and hard-of-hearing community still face daily obstacles to being understood. American Sign Language (ASL) is a powerful bridge, but it remains out of reach for the many people who have never learned it.

I built ASL Learning Buddy to change that.

My goal was simple: make ASL approachable through technology. Not just as a learning tool, but as a platform that empowers real-time interaction using sign language, turning curiosity into capability.


💡 What It Does

ASL Learning Buddy is a web-based platform designed to:

  • 🧠 Teach ASL signs through clean, interactive modules
  • 📷 Recognize hand signs in real time using webcam input
  • 🔁 Translate ASL to text on the fly
  • 🔊 Convert text to speech, enabling two-way communication
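
The sign → text → speech loop above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the special labels `"space"` and `"del"` are assumptions about what a classifier might emit, and the speech part uses the browser's native `SpeechSynthesisUtterance` API (guarded so the module also loads outside a browser).

```typescript
/** Append one recognized sign to the running transcript.
 *  "space" and "del" are hypothetical control labels. */
export function appendSign(transcript: string, sign: string): string {
  if (sign === "space") return transcript + " ";
  if (sign === "del") return transcript.slice(0, -1);
  return transcript + sign;
}

/** Speak the transcript with the browser's built-in speech synthesis. */
export function speak(text: string): void {
  // Guarded so this is a no-op outside a browser (tests, SSR).
  if (typeof window === "undefined" || !("speechSynthesis" in window)) return;
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 0.9; // slightly slower playback for clarity
  window.speechSynthesis.speak(utterance);
}
```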

The platform isn't just for learners; it's a practical bridge between hearing and non-hearing communities.


๐Ÿ› ๏ธ How I Built It

I used a combination of modern web tools and machine learning to bring this to life:

  • Frontend: React.js with Vite, TypeScript, and Tailwind CSS for a fast, responsive UI
  • Model Integration: A Hugging Face model currently powers the hand sign detection
  • Webcam & Media API: Real-time video stream is processed directly in the browser
  • Text-to-Speech: Integrated via native browser speech synthesis

Everything is deployed on Vercel, making the app globally accessible within seconds of deployment.
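
Vercel auto-detects Vite builds, so deployment needs little setup; single-page apps commonly add a rewrite so deep links don't 404 on refresh. A typical `vercel.json` for that (an assumption about this project's config, not taken from it):

```json
{
  "rewrites": [{ "source": "/(.*)", "destination": "/index.html" }]
}
```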


🧱 Current Challenges

This is just the beginning. Like any good version 1.0, it's a working prototype with clear room for growth:

  • โš™๏ธ Model Accuracy: The current Hugging Face model isnโ€™t hitting the reliability I expect. Different lighting, hand sizes, and speeds impact the predictions. I'm already working on a custom TensorFlow model paired with MediaPipe for better precision.
  • ๐Ÿ•ธ๏ธ UI Polish: While functional, the UI needs more finesse. I plan to add smoother transitions, better visual feedback, and overall aesthetic polish.
  • ๐Ÿงช Edge Cases: Fast hand movements or partially-visible signs confuse the model โ€” something Iโ€™ll address in the custom build.
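
One common way to tame the jitter from fast hand movements is to accept a prediction only after it wins several consecutive frames. This is a sketch of that idea, not the app's actual logic; the default streak length of 8 frames is an arbitrary assumption to tune:

```typescript
/** Emits a label once it has been predicted N frames in a row. */
export class PredictionSmoother {
  private last: string | null = null;
  private streak = 0;

  constructor(private readonly minStreak: number = 8) {}

  /** Feed one per-frame prediction; returns the label exactly once,
   *  the first time it reaches the required streak, else null. */
  push(label: string): string | null {
    if (label === this.last) {
      this.streak += 1;
    } else {
      this.last = label;
      this.streak = 1;
    }
    return this.streak === this.minStreak ? label : null;
  }
}
```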

๐Ÿ† Achievements So Far

  • Developed a fully functional ASL platform from scratch: design, code, deploy
  • Integrated live webcam input with AI-powered sign detection
  • Enabled bi-directional communication: from signs to text to speech
  • Got valuable feedback from early testers, including members of the deaf and hard-of-hearing community
  • Made it accessible for free on the web: no downloads, no barriers

📚 What I Learned

  • How to work with and fine-tune AI models in-browser using TensorFlow.js
  • The balance between technical complexity and accessibility
  • Designing with empathy: making every click and interaction count for all users
  • That even an early prototype, if built with intent, can spark genuine impact

🔮 What's Next for ASL Learning Buddy

This version is just a prototype, a glimpse of what's coming.

Here's what I'm building next:

  • ✋ A custom-trained sign recognition model using TensorFlow + MediaPipe
  • 📱 A mobile-friendly version for on-the-go use
  • 🧩 Gamified learning paths and achievements
  • 🗣️ Voice-to-sign translation, flipping the interaction the other way
  • 🤝 Collaborating with educators and accessibility advocates to refine the experience

This is more than a tool; it's a mission, and one I'm just getting started on.


๐ŸŒ Final Thought

Technology should be a bridge, not a barrier. With ASL Learning Buddy, I'm building a world where learning to communicate is easy, inclusive, and empowering for everyone.
