🤟 Sign Language Recognition for Deaf and Dumb

Welcome to Sign Language Recognition for Deaf and Dumb — an innovative real-time application built to bridge communication gaps for the deaf and hard-of-hearing community. Leveraging the power of machine learning and computer vision, this project recognizes American Sign Language (ASL) gestures through a webcam, enabling seamless, intuitive communication for everyone.

“Technology is best when it brings people together.”
This application embodies that vision by empowering the deaf community with a tool that recognizes and translates their gestures into digital interaction.


🌟 Project Tagline

Empowering Communication Through Vision — Real-Time ASL Recognition for All.


📌 What Inspired Us?

The spark behind this project was the realization that millions of people face daily communication barriers due to hearing and speech impairments. We wanted to build a tool that not only addresses this but also promotes inclusivity, awareness, and technological empowerment. With rapid advancements in AI, we saw an opportunity to combine gesture recognition with real-time feedback to make sign language more accessible and interactive.


📚 What We Learnt

  • Deep dive into gesture recognition using MediaPipe Hands and its application in real-time scenarios.
  • Effective integration of React, Redux, and Firebase to build scalable and responsive front-end and back-end systems.
  • Best practices in state management, live data processing, and adaptive model training.
  • Implementing leaderboards, authentication, and progress analytics using modern tech stacks.

🛠️ How We Built It

We started by collecting a custom dataset of ASL alphabet letters and words. We then used MediaPipe Hands to track hand landmarks and match gestures against the trained data. A React.js frontend provided a dynamic user interface, while Firebase handled authentication, the real-time database, and hosting. Redux managed the application state, and Redux-Thunk enabled async logic.

The project is structured to continuously improve as users interact with it — learning their patterns and adapting for better accuracy.
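To make the Redux/Redux-Thunk flow above concrete, here is a minimal, self-contained sketch. The store shape, the `SIGN_RECOGNIZED` action type, and the `recordSign` thunk are illustrative assumptions, and the Firebase write is stubbed with a resolved promise; this is not the project's actual code.

```javascript
// Tiny thunk-capable store: dispatching a function gives it
// (dispatch, getState), mimicking the Redux-Thunk middleware.
function createStore(reducer, initial) {
  let state = initial;
  const dispatch = (action) =>
    typeof action === "function"
      ? action(dispatch, () => state)
      : (state = reducer(state, action));
  return { dispatch, getState: () => state };
}

// Reducer tracking which signs the user has practiced (hypothetical shape).
function progressReducer(state, action) {
  switch (action.type) {
    case "SIGN_RECOGNIZED":
      return { ...state, learned: [...state.learned, action.sign] };
    default:
      return state;
  }
}

// Hypothetical async thunk: persist the recognized sign (e.g. to
// Firestore), then update local state. Persistence is stubbed here.
const recordSign = (sign) => async (dispatch) => {
  await Promise.resolve(); // stand-in for a Firebase write
  dispatch({ type: "SIGN_RECOGNIZED", sign });
};

const store = createStore(progressReducer, { learned: [] });
const done = store.dispatch(recordSign("A"));
done.then(() => {
  console.log(store.getState().learned); // [ 'A' ]
});
```

In the real app the same pattern lets the UI dispatch one action per recognized gesture while the Firebase write happens off the render path.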


🚀 Key Features

  • Real-Time Gesture Recognition: Detects and translates ASL signs instantly via webcam.
  • 📈 User Progress Tracking: Visual dashboard showing learned signs and engagement time.
  • 🧠 Adaptive Learning: The system improves recognition accuracy as more users interact.
  • 🏆 Global Leaderboard: Compete and see your ranking among other signers.
  • 🔒 Secure User Authentication: Powered by Firebase.
  • 📊 Insightful Visual Analytics: Charts and graphs show your learning journey.

🧠 How It Works

  1. MediaPipe Hands processes video input from the webcam to track finger and hand landmarks.
  2. These landmarks are fed into a trained machine learning model that maps gestures to letters or words.
  3. The output is displayed in real-time on the screen along with progress tracking.
  4. Firebase stores user-specific progress data, enabling feedback and leaderboard updates.
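Steps 1–3 above can be sketched under simplifying assumptions: landmarks arrive as flattened coordinate arrays, and the trained model is reduced to nearest-template matching over toy data. All names, templates, and values here are hypothetical, not the project's real model.

```javascript
// Euclidean distance between two flattened landmark vectors.
function distance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
  return Math.sqrt(sum);
}

// Pick the label whose stored template is closest to the live frame --
// a toy stand-in for the trained gesture classifier.
function classify(frame, templates) {
  let best = { label: null, d: Infinity };
  for (const [label, tpl] of Object.entries(templates)) {
    const d = distance(frame, tpl);
    if (d < best.d) best = { label, d };
  }
  return best.label;
}

// Hypothetical templates (a real frame would hold 21 landmarks).
const templates = {
  A: [0.1, 0.2, 0.3, 0.4],
  B: [0.9, 0.8, 0.7, 0.6],
};

const liveFrame = [0.12, 0.19, 0.31, 0.41]; // simulated webcam landmarks
console.log(classify(liveFrame, templates)); // "A"
```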

🧪 Model Training

  • Trained on the 26 letters of the ASL alphabet + 16 frequently used ASL words.
  • Uses the 21 key hand landmarks detected via MediaPipe for pattern matching.
  • Supports continuous learning by storing user gestures for further training.
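One common way to make landmark pattern matching robust to where the hand appears in the frame is to normalize each set of points before comparison. The sketch below assumes wrist-anchored normalization (landmark 0 as origin, scaled by the farthest point); this is an illustrative guess at the preprocessing, not the project's confirmed pipeline, and uses 3-point toy hands in place of the full 21 landmarks.

```javascript
// Translate so the wrist (landmark 0) is the origin, then scale by the
// largest wrist-to-point distance, yielding position/scale invariance.
function normalizeLandmarks(points) {
  // points: array of { x, y } in image coordinates
  const wrist = points[0];
  const shifted = points.map((p) => ({ x: p.x - wrist.x, y: p.y - wrist.y }));
  const scale = Math.max(...shifted.map((p) => Math.hypot(p.x, p.y))) || 1;
  return shifted.map((p) => ({ x: p.x / scale, y: p.y / scale }));
}

// The same gesture at different positions/scales normalizes identically.
const small = [{ x: 10, y: 10 }, { x: 12, y: 10 }, { x: 10, y: 14 }];
const large = [{ x: 100, y: 50 }, { x: 104, y: 50 }, { x: 100, y: 58 }];
console.log(normalizeLandmarks(small));
console.log(normalizeLandmarks(large));
```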

🎯 Challenges We Faced

  • Mapping noisy gesture data to consistent output.
  • Fine-tuning the ML model to differentiate between similar ASL signs.
  • Achieving smooth webcam integration with real-time feedback.
  • Managing async state changes across Redux and Firebase.
  • Deployment configuration for Vite and Firebase Hosting compatibility.

💡 Future Enhancements

  • Add voice synthesis to convert recognized signs to speech.
  • Expand dataset to include full ASL phrases.
  • Enable multi-user chat using signs.
  • Integrate with AR devices for immersive experience.


What's New

  • Real-Time ASL Gesture Detection using MediaPipe Hands
  • Recognition of the 26 ASL Alphabet Letters + 16 Common Words with high accuracy
  • Interactive User Leaderboards to foster learning through gamification
  • Learning Progress Tracker powered by Chart.js for personalized insights
  • Secure Authentication & Data Storage via Firebase Authentication and Firestore
  • Enhanced UX/UI with React, Redux, React-Toastify, and React-Webcam
  • Fully Responsive and Cross-Browser Compatible

Upcoming Features

  • Mobile-Responsive Enhancements
  • Support for Full Sentence ASL Recognition
  • Voice Output for Recognized Gestures
  • Personalized Learning Recommendations Based on Usage Data
  • Multi-Language Support
