Inspiration
Communication is a fundamental human right, but millions of people in the deaf and hard-of-hearing community still face daily obstacles to being understood. American Sign Language (ASL) is a powerful bridge, yet most hearing people have never learned it.
I built ASL Learning Buddy to change that.
My goal was simple: make ASL approachable through technology. Not just as a learning tool, but as a platform that empowers real-time interaction using sign language, turning curiosity into capability.
What It Does
ASL Learning Buddy is a web-based platform designed to:
- Teach ASL signs through clean, interactive modules
- Recognize hand signs in real time using webcam input
- Translate ASL to text on the fly
- Convert text to speech, enabling two-way communication
The platform isn't just for learners; it's a practical bridge between hearing and non-hearing communities.
How I Built It
I used a combination of modern web tools and machine learning to bring this to life:
- Frontend: React.js with Vite, TypeScript, and Tailwind CSS for a fast, responsive UI
- Model Integration: A Hugging Face model currently powers the hand sign detection
- Webcam & Media APIs: the real-time video stream is processed directly in the browser
- Text-to-Speech: Integrated via native browser speech synthesis
Everything is deployed on Vercel, making the app globally accessible within seconds of deployment.
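To give a feel for how the pieces fit together, here is a minimal TypeScript sketch of the glue between sign detection and speech output: turning a stream of per-frame model predictions into readable text. The `Prediction` shape, the label names (including a `"space"` label), and the confidence threshold are all illustrative assumptions, not the app's actual model interface.

```typescript
// Illustrative sketch only: the Prediction shape, labels like "space",
// and the 0.8 threshold are assumptions, not the real model's output.

interface Prediction {
  label: string;      // e.g. "A", "B", or "space" (hypothetical labels)
  confidence: number; // model score in [0, 1]
}

// Keep only confident frames, then collapse consecutive repeats so that
// holding one sign across many frames yields a single character.
// (A real pipeline also needs timing logic so doubled letters survive.)
function predictionsToPhrase(frames: Prediction[], minConfidence = 0.8): string {
  let text = "";
  let lastLabel = "";
  for (const frame of frames) {
    if (frame.confidence < minConfidence) continue; // drop shaky frames
    if (frame.label === lastLabel) continue;        // same sign still held
    lastLabel = frame.label;
    text += frame.label === "space" ? " " : frame.label;
  }
  return text;
}

// In the browser, the result could then be spoken with the native
// SpeechSynthesis API mentioned above (not runnable outside a browser):
//   const utterance = new SpeechSynthesisUtterance(predictionsToPhrase(frames));
//   window.speechSynthesis.speak(utterance);
```

The collapse step is the key design choice: a 30 fps webcam produces dozens of identical predictions per held sign, so some deduplication has to happen before anything is displayed or spoken.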
Current Challenges
This is just the beginning, and like any good version 1.0, it's a working prototype with clear room for growth:
- Model Accuracy: The current Hugging Face model isn't hitting the reliability I expect. Different lighting conditions, hand sizes, and signing speeds affect the predictions. I'm already working on a custom TensorFlow model paired with MediaPipe for better precision.
- UI Polish: While functional, the UI needs more finesse. I plan to add smoother transitions, better visual feedback, and overall aesthetic polish.
- Edge Cases: Fast hand movements or partially visible signs confuse the model, something I'll address in the custom build.
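One common way to soften the jitter described above, sketched here in TypeScript, is a sliding-window majority vote over recent frame labels: a letter is only accepted once it dominates the window. The window size and the outright-majority rule are assumptions for illustration, not what the app currently does.

```typescript
// Illustrative jitter-smoothing sketch; window size and the majority
// rule are assumptions, not the app's current behavior.

// Return the label that wins an outright majority of the window,
// or null when no label is stable enough yet.
function majorityVote(frameLabels: string[]): string | null {
  const counts = new Map<string, number>();
  for (const label of frameLabels) {
    counts.set(label, (counts.get(label) ?? 0) + 1);
  }
  let best: string | null = null;
  let bestCount = 0;
  counts.forEach((count, label) => {
    if (count > bestCount) {
      best = label;
      bestCount = count;
    }
  });
  return bestCount > frameLabels.length / 2 ? best : null;
}

// Slide a fixed-size window over the label stream, emitting one
// smoothed result per window position.
function smooth(labels: string[], windowSize = 5): (string | null)[] {
  const out: (string | null)[] = [];
  for (let i = 0; i + windowSize <= labels.length; i++) {
    out.push(majorityVote(labels.slice(i, i + windowSize)));
  }
  return out;
}
```

A brief flicker to the wrong letter gets outvoted by its neighbors, while a genuinely ambiguous window yields null instead of a wrong guess, which is usually the better failure mode for live captioning.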
Achievements So Far
- Developed a fully functional ASL platform from scratch: design, code, deploy
- Integrated live webcam input with AI-powered sign detection
- Enabled bi-directional communication: from signs to text to speech
- Got valuable feedback from early testers, including members of the deaf and hard-of-hearing community
- Made it accessible for free on the web: no downloads, no barriers
What I Learned
- How to work with and fine-tune AI models in-browser using TensorFlow.js
- The balance between technical complexity and accessibility
- Designing with empathy: making every click and interaction count for all users
- That even an early prototype, if built with intent, can spark genuine impact
What's Next for ASL Learning Buddy
This version is just a prototype, a glimpse of what's coming.
Here's what I'm building next:
- A custom-trained sign recognition model using TensorFlow + MediaPipe
- A mobile-friendly version for on-the-go use
- Gamified learning paths and achievements
- Voice-to-sign translation, flipping the interaction the other way
- Collaborating with educators and accessibility advocates to refine the experience
This is more than a tool; it's a mission, and one I'm just getting started on.
Final Thought
Technology should be a bridge, not a barrier. With ASL Learning Buddy, I'm building a world where learning to communicate is easy, inclusive, and empowering for everyone.
Built With
- css
- hugging-face
- javascript
- python
- react.js
- typescript
- vercel
- vite