Inspiration

During emergencies, communication can be the difference between life and death. Here's a scary thought: what happens when you can't speak? Millions of people who are deaf, hard-of-hearing, non-verbal, or experiencing a panic attack are forced to rely on systems that assume verbal communication. Texting 911 isn't widely available, and long messages are hard to write in moments of panic. We wanted to build something that lets anyone ask for help quickly and clearly, without needing to speak or type.

What it does

Hear2Help is an accessibility-first, minimal-touch emergency assistant designed for users who cannot communicate verbally. It works in just a few taps and provides visual, tactile, and AI-powered support.

Key Features

  1. Tap-Based Emergency Mode with large, color-coded buttons
  2. AI Speech Generation (ElevenLabs) that speaks on behalf of the user
  3. Location Awareness to show nearby hospitals and police stations (see the sketch after this list)
  4. Accessibility-Optimized UI (high contrast, large tap areas, dyslexia-friendly fonts)
  5. Calm Mode for panic attacks (breathing guide + grounding prompts)
  6. Triple-Tap Emergency Shortcut to instantly trigger SOS mode
  7. Quick Medical Profile for bystanders
  8. Multilingual Interface
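
To give a feel for the location feature, here is a minimal TypeScript sketch of the browser Geolocation lookup that feeds the nearby-places view. The option values and error handling here are illustrative, not our exact code:

```typescript
// Wraps the browser Geolocation API in a promise so the UI can
// await a fix before looking up nearby hospitals and police stations.
function getPosition(): Promise<GeolocationPosition> {
  return new Promise((resolve, reject) => {
    if (!("geolocation" in navigator)) {
      reject(new Error("Geolocation is not supported on this device"));
      return;
    }
    navigator.geolocation.getCurrentPosition(resolve, reject, {
      enableHighAccuracy: true, // prefer GPS over WiFi/cell heuristics
      timeout: 10_000,          // don't hang the emergency flow
      maximumAge: 30_000,       // a slightly stale fix is acceptable here
    });
  });
}

// Usage: feed the coordinates into whatever nearby-places lookup you use.
getPosition()
  .then(({ coords }) => {
    console.log(`User at ${coords.latitude}, ${coords.longitude}`);
  })
  .catch((err) => console.error("Location unavailable:", err));
```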

How we built it

Tech Stack: TypeScript + Next.js
Speech: ElevenLabs real-time TTS API
Chatbot: Gemini API
Translation: Google Translate Widget
Location: Geolocation API + WiFi/GPS heuristics
Design: WCAG-inspired color palette, accessibility testing with screen readers
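
For the speech step, here is a simplified sketch of calling the ElevenLabs text-to-speech endpoint. The voice ID, key handling, and phrase are placeholders; in production you would proxy this through a server route so the API key stays secret:

```typescript
// Simplified sketch: ask ElevenLabs to speak a preset phrase for the user.
// VOICE_ID is a placeholder; the real app picks the phrase from the
// tapped emergency button.
const VOICE_ID = "your-voice-id";

async function speakForUser(text: string): Promise<void> {
  const res = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`,
    {
      method: "POST",
      headers: {
        "xi-api-key": process.env.NEXT_PUBLIC_ELEVENLABS_KEY ?? "",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        text,
        model_id: "eleven_multilingual_v2", // multilingual model for the translated UI
      }),
    }
  );
  if (!res.ok) throw new Error(`TTS failed: ${res.status}`);

  // The endpoint returns audio bytes; play them immediately.
  const audio = new Audio(URL.createObjectURL(await res.blob()));
  await audio.play();
}

// e.g. speakForUser("I am deaf. I need medical help. Please read my screen.");
```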

Challenges we ran into

  1. Designing a UI that works under high stress without overwhelming users
  2. Getting real-time speech generation to be clear in noisy environments
  3. Implementing the triple-tap gesture reliably (a sketch of the idea follows this list)
  4. Coding-specific hurdles, such as getting translation to work and enabling geolocation services
  5. Making accessibility decisions without compromising simplicity
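
The triple-tap detection ultimately reduces to timestamp bookkeeping. A minimal sketch of the idea (the 600 ms window is illustrative, not our tuned value):

```typescript
// Minimal triple-tap detector: fire onTriple() when three taps land
// within WINDOW_MS of the first. The window length is illustrative.
const WINDOW_MS = 600;

function makeTripleTapDetector(onTriple: () => void) {
  let taps: number[] = [];

  return function handleTap() {
    const now = Date.now();
    // Keep only taps still inside the window.
    taps = taps.filter((t) => now - t < WINDOW_MS);
    taps.push(now);

    if (taps.length >= 3) {
      taps = []; // reset so further tapping doesn't re-fire immediately
      onTriple();
    }
  };
}

// Usage: wire the detector to the whole screen so any surface works.
const handleTap = makeTripleTapDetector(() => {
  console.log("SOS mode triggered");
});
document.addEventListener("pointerdown", handleTap);
```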

Accomplishments that we're proud of

  1. A fully working emergency assistant that requires zero speaking or typing and can actually help people in real-life emergencies
  2. Real-time AI speech that sounds natural and helps users communicate instantly
  3. Successfully implemented the triple-tap SOS gesture
  4. A real-time chatbot that helps answer common emergency questions (see the sketch after this list)
  5. Interface translation across the many languages the Google Translate widget supports
  6. Geolocation services that show the nearest hospital, police station, and landmark, plus the user's own location
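
As a sketch of the chatbot wiring, assuming the @google/generative-ai SDK (the model name and system prompt here are illustrative, not necessarily what we shipped):

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

// Sketch of the emergency Q&A call.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
  systemInstruction:
    "You are an emergency assistant. Answer in short, plain sentences " +
    "suitable for someone in distress. Never replace calling 911.",
});

export async function askEmergencyQuestion(question: string): Promise<string> {
  const result = await model.generateContent(question);
  return result.response.text();
}

// e.g. askEmergencyQuestion("How do I treat a severe allergic reaction?")
```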

What we learned

  1. Accessibility is not just a feature; it changes every design decision
  2. Simplicity is surprisingly hard when designing for panic scenarios
  3. Speech synthesis is more powerful than we expected for assistive tech
  4. Building for diverse disabilities taught us to rethink assumptions we make every day

What's next for Hear2Help

  1. Multi-language voice output so travelers can get help abroad
  2. Support for emergency texting services (where available)
  3. Smartwatch integration for quicker SOS activation
  4. User testing with non-verbal and deaf communities to refine flows

Built With

typescript · next.js · elevenlabs · gemini · google-translate · geolocation-api