SignAI - Breaking Communication Barriers with AI

Inspiration

466 million people worldwide have disabling hearing loss. I witnessed a deaf individual unable to communicate during an emergency: they couldn't call for help and couldn't make themselves understood. That moment changed everything.

Most hearing people don't know sign language. Deaf individuals face daily barriers in hospitals, stores, and emergencies. Communication is a fundamental human right.

SignAI translates sign language into speech instantly, in any language, accessible to anyone with a browser.


What It Does

Real-time AI sign language translator running entirely in your browser:

Camera → MediaPipe AI → Gesture Recognition → Sentence Builder → Translation → Speech

  • Recognizes 9 essential ASL gestures (88-95% confidence)
  • Combines gestures into natural sentences ("Help" + "You" = "Do you need help?")
  • Translates to 6+ languages (Hindi, Tamil, Telugu, French, Spanish, and more)
  • Speaks output using text-to-speech
  • Zero installation, zero backend
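The sentence-builder stage can be sketched as a lookup from buffered gesture labels to phrase templates, falling back to the raw words. This is a minimal illustration; the function name, phrase table, and fallback rule are ours, not the project's actual code.

```javascript
// Hypothetical sketch of the sentence builder: recognized gesture labels
// are matched against phrase templates before falling back to joining
// the raw words. The PHRASES entries are illustrative examples.
const PHRASES = new Map([
  ["help|you", "Do you need help?"],
  ["thank|you", "Thank you."],
]);

function buildSentence(gestures) {
  const key = gestures.map((g) => g.toLowerCase()).join("|");
  if (PHRASES.has(key)) return PHRASES.get(key);
  // Fallback: emit the literal gesture sequence as a sentence.
  const raw = gestures.join(" ");
  return raw.charAt(0).toUpperCase() + raw.slice(1) + ".";
}

console.log(buildSentence(["Help", "You"])); // → "Do you need help?"
```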

How We Built It

Tech Stack:

  • MediaPipe Hands (Google AI) - Real-time hand tracking with 21 landmarks
  • JavaScript - Custom geometric gesture classifier
  • Canvas API - Visual feedback with hand skeleton
  • Web Speech API - Text-to-speech
  • MyMemory API - Neural translation
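The wiring of these pieces looks roughly like the configuration sketch below, assuming the @mediapipe/hands and @mediapipe/camera_utils scripts are loaded on the page. The option values and the `classifyGesture` callback are assumptions for illustration, not the project's exact settings.

```javascript
// Hedged wiring sketch: connect a webcam feed to MediaPipe Hands and
// hand each frame's landmarks to a (hypothetical) gesture classifier.
const hands = new Hands({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`,
});
hands.setOptions({
  maxNumHands: 1,            // one signing hand keeps inference fast
  modelComplexity: 0,        // reduced model complexity, as noted above
  minDetectionConfidence: 0.7,
  minTrackingConfidence: 0.7,
});
hands.onResults((results) => {
  // results.multiHandLandmarks: up to 21 (x, y, z) points per hand
  if (results.multiHandLandmarks?.length) {
    classifyGesture(results.multiHandLandmarks[0]); // hypothetical classifier
  }
});

const video = document.querySelector("video");
new Camera(video, {
  onFrame: async () => hands.send({ image: video }),
  width: 640,
  height: 480,
}).start();
```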

Key Innovations:

  • Geometric ratio-based finger detection (tip-to-wrist vs PIP-to-wrist)
  • Hand-specific gestures (Help requires right hand only)
  • Hardware-accelerated rendering (60fps)
  • Gesture stabilization buffer (removes jitter)
  • Adjustable detection speed (Fast/Normal/Learning modes)

Challenges

1. Gesture Confusion (Hello vs Help)

  • Both have 4-5 fingers extended
  • Solution: Finger spread analysis + hand-specific detection
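One way to sketch that spread check: average the gaps between adjacent fingertips, normalized by palm width, and require both a high spread and (for Help) the right hand. Fingertip indices are MediaPipe's (index 8, middle 12, ring 16, pinky 20); the 0.45 threshold and function names are illustrative assumptions.

```javascript
// Sketch of the spread analysis separating "Hello" (fingers fanned out)
// from "Help" (fingers together, right hand only). Threshold is a guess.
const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);

function fingerSpread(lm) {
  const tips = [8, 12, 16, 20].map((i) => lm[i]);
  const gaps = [dist(tips[0], tips[1]), dist(tips[1], tips[2]), dist(tips[2], tips[3])];
  const palmWidth = dist(lm[5], lm[17]); // index MCP to pinky MCP
  return gaps.reduce((a, b) => a + b, 0) / gaps.length / palmWidth;
}

function disambiguate(lm, isRightHand) {
  if (fingerSpread(lm) > 0.45) return "Hello"; // fanned fingers
  return isRightHand ? "Help" : "Unknown";     // Help is right-hand-only
}
```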

2. Camera Performance

  • Video stuttering made detection unreliable
  • Solution: Hardware acceleration + frame skipping + reduced model complexity
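The frame-skipping part of that fix follows a standard pattern: render every frame, but only run inference on every Nth one. A minimal sketch (the SKIP value of 2 is an example, not the project's setting):

```javascript
// Minimal frame-skipping pattern: inference fires on every SKIP-th frame
// so rendering stays at full rate while detection load is halved.
const SKIP = 2;
let frameCount = 0;

function shouldRunInference() {
  return frameCount++ % SKIP === 0;
}

// Simulate 6 video frames: inference runs on frames 0, 2, 4.
const ran = [];
for (let i = 0; i < 6; i++) {
  if (shouldRunInference()) ran.push(i);
}
console.log(ran); // → [0, 2, 4]
```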

3. Translation Quality

  • The free API sometimes returned unrelated cached text ("Delhi metro me apka swagat hai", Hindi for "Welcome to the Delhi Metro")
  • Solution: Keyword filtering + validation checks
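That validation pass might look like the sketch below: reject translations that match known junk phrases or are wildly longer than the source. The blocklist entries and length heuristic are illustrative assumptions, not the project's real rules.

```javascript
// Hedged sketch of translation validation: filter out cached/unrelated
// API responses before speaking them aloud.
const JUNK_PATTERNS = [/delhi metro/i, /swagat/i]; // example blocklist

function isValidTranslation(source, translated) {
  if (!translated || !translated.trim()) return false;
  if (JUNK_PATTERNS.some((re) => re.test(translated))) return false;
  // Reject wildly mismatched lengths (e.g., a one-word sign coming
  // back as a full cached sentence).
  if (translated.length > source.length * 4 + 20) return false;
  return true;
}

console.log(isValidTranslation("Help", "Delhi metro me apka swagat hai")); // → false
console.log(isValidTranslation("Help", "Ayuda")); // → true
```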

4. Gesture Precision

  • Thumbs up/down triggering incorrectly
  • Solution: Strict priority ordering + tightened thresholds + Y-axis validation
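The Y-axis validation can be illustrated like this: MediaPipe image coordinates put y = 0 at the top, so a thumb tip (landmark 4) well above the wrist (landmark 0) means thumbs-up, well below means thumbs-down, and anything in between is rejected rather than guessed. The 0.15 margin is an assumption.

```javascript
// Illustrative Y-axis check that stops thumbs up/down from misfiring:
// require a clear vertical gap between thumb tip and wrist.
function classifyThumb(landmarks, margin = 0.15) {
  const dy = landmarks[0].y - landmarks[4].y; // positive = tip above wrist
  if (dy > margin) return "thumbs_up";
  if (dy < -margin) return "thumbs_down";
  return null; // ambiguous: don't fire either gesture
}

const lm = Array.from({ length: 21 }, () => ({ x: 0.5, y: 0.5 }));
lm[4] = { x: 0.5, y: 0.2 }; // thumb tip well above the wrist
console.log(classifyThumb(lm)); // → "thumbs_up"
```

Returning null in the ambiguous band is what removes the false triggers: no gesture fires unless the geometry is unambiguous.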

Accomplishments

  • 100% browser-based (no installation, no backend)
  • Real-time 30fps AI inference client-side
  • 88-95% gesture accuracy
  • Multilingual support (6+ languages)
  • Production-ready UX (history, export, confidence meter, speed control)

What We Learned

  • Computer vision is hard - Geometric ratios beat absolute positions
  • Performance matters - 60fps requires hardware acceleration + frame skipping
  • Simplicity wins - Reduced from 17 to 9 gestures for better accuracy
  • 466M people need this - Technology can bridge communication gaps

What's Next

Immediate:

  • Full ASL alphabet (A-Z)
  • 50+ vocabulary words
  • Offline mode
  • Mobile optimization

Advanced:

  • Two-hand gestures
  • Voice-to-sign (reverse translation)
  • Video call integration (Zoom/Meet overlay)
  • Personal custom gestures

Scale:

  • Partner with deaf communities
  • Healthcare kiosks
  • Education platform
  • Open-source SDK

Impact

Who Benefits:

  • 466M people with hearing loss
  • Families with deaf members
  • Healthcare workers treating deaf patients
  • Service workers assisting deaf customers

Real Scenarios:

  • Emergency rooms: Deaf patient signs "Help" → Doctor understands
  • Grocery stores: Customer asks "Where is milk?" via gestures
  • Airports: Travelers navigate security
  • Video calls: Remote communication with live translation

Why It Matters:

"Accessibility is not a feature. It's a fundamental human right."

SignAI is free, instant, universal, and multilingual. No subscriptions, no downloads, no barriers.


SignAI: Where hands speak, and AI listens.
