Inspiration

The inspiration for SILENCE came from the communication challenges that non-verbal individuals face, especially in critical situations like medical emergencies. We realized that many of these individuals rely on sign language, but there is no easy way for them to make phone calls to clinics or hospitals. Our goal was to create a tool that empowers them with independence and accessibility in healthcare communication.

What it does

SILENCE is a program that translates sign language into speech, allowing non-verbal individuals to make phone calls. A user signs in front of a camera, the system recognizes their gestures, converts them into text, and then outputs spoken language. This enables effective communication with doctors, nurses, and emergency responders over the phone.

How we built it

We developed SILENCE using:

  • Mediapipe for hand tracking and gesture detection.
  • Pickle for storing trained models and data.
  • TensorFlow for recognizing and classifying hand gestures.
  • OpenCV for real-time image processing.
  • Text-to-Speech (TTS) engines to convert recognized text into spoken language.

We combined these technologies to create a functional prototype that can recognize signs and translate them into speech.
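The core of such a pipeline is turning Mediapipe's hand landmarks into a feature vector the classifier can consume. Below is a minimal sketch in plain Python — an illustration, not the project's actual code: the function name `landmarks_to_features` and the wrist-origin normalization scheme are our assumptions. In the real pipeline, Mediapipe supplies 21 (x, y) hand landmarks per detected hand, and a TensorFlow model classifies the flattened vector.

```python
def landmarks_to_features(landmarks):
    """Normalize 21 (x, y) hand landmarks so the classifier is
    invariant to where the hand appears in the frame and to its size.

    In the full pipeline, `landmarks` would come from Mediapipe's hand
    tracker, and the returned vector would feed a TensorFlow classifier.
    """
    # Use the wrist landmark (index 0) as the origin.
    base_x, base_y = landmarks[0]
    shifted = [(x - base_x, y - base_y) for x, y in landmarks]
    # Scale so the largest coordinate magnitude becomes 1.0.
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    # Flatten to a single feature vector: [x0, y0, x1, y1, ...].
    return [v / scale for point in shifted for v in point]

# Example with synthetic landmarks (wrist at 0.5, 0.5, fingers spread):
fake = [(0.5, 0.5)] + [(0.5 + 0.01 * i, 0.5 - 0.02 * i) for i in range(1, 21)]
features = landmarks_to_features(fake)
print(len(features))  # → 42 (21 landmarks × 2 coordinates)
```

Normalizing relative to the wrist is one common way to make gesture classification robust to hand position; a recognized label would then be passed to the TTS engine for speech output.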

Challenges we ran into

  • Gesture Recognition Accuracy: Some signs were difficult to distinguish due to minor hand position variations.
  • Processing Speed: Ensuring real-time recognition without lag was challenging.
  • Sign Language Variations: Different regions use different versions of sign language, making it difficult to build a universal model.
  • Integration with Phone Calls: Finding a smooth way to integrate the system with real-world phone communication was another obstacle.

Accomplishments that we're proud of

  • Successfully developing a working prototype in a limited timeframe.
  • Creating a system that allows non-verbal individuals to communicate effectively over the phone.
  • Overcoming technical challenges in sign recognition and text-to-speech conversion.
  • Making an impact by contributing to accessibility and inclusivity in healthcare communication.

What we learned

  • The importance of real-time processing in accessibility tools.
  • How different technologies (Mediapipe, TensorFlow, OpenCV) can be integrated for gesture recognition.
  • The complexity of sign language and the need for adaptability in AI-based recognition models.
  • The significance of user-centered design when developing assistive technology.

What's next for SILENCE

  • Expanding Language Support: Implementing recognition for multiple sign languages.
  • Dynamic Gesture Recognition: Using AI to track hand movement over time rather than classifying static hand poses.
  • Two-Way Communication: Adding a feature that translates spoken words into sign language animations.
  • Emergency Services Collaboration: Partnering with emergency responders to ensure non-verbal individuals can communicate in urgent situations.
  • Mobile App Development: Creating a user-friendly mobile app to make the technology more accessible to a wider audience.

SILENCE has the potential to significantly improve accessibility for non-verbal individuals, and we are excited about its future development!

Built With

  • mediapipe
  • opencv
  • pickle
  • tensorflow