Inspiration

Millions of people with speech disabilities struggle to communicate freely in everyday life — from making a simple phone call to ordering food. We wanted to create a bridge between voice and understanding — an AI-powered companion that lets everyone express themselves effortlessly, regardless of how they speak.


What it does

Bridge is an accessibility app that enables seamless two-way communication between speech-disabled individuals and others.

  • Users can type or select preset messages, which Bridge converts to realistic speech using AI voice synthesis.
  • Incoming speech is transcribed instantly into text and displayed on-screen, allowing real-time conversation.
  • A call mode lets users speak with others over the phone, powered by speech-to-text (STT) and text-to-speech (TTS) modules.
  • The interface includes a floating assistant button, accessible anywhere on the device for quick replies.
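The text-to-voice step above can be sketched with Android's built-in TextToSpeech engine. This is a hedged illustration, not the app's actual code: the class name, locale, and utterance id are our assumptions, and as Android-only code it will not run outside a device or emulator.

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

// Illustrative sketch: speak a typed or preset message aloud.
public class MessageSpeaker {
    private TextToSpeech tts;
    private boolean ready = false;

    public MessageSpeaker(Context context) {
        // The engine initializes asynchronously; speaking before the
        // callback fires would silently fail, hence the ready flag.
        tts = new TextToSpeech(context, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US); // language choice is an assumption
                ready = true;
            }
        });
    }

    public void speak(String message) {
        if (ready) {
            // QUEUE_FLUSH interrupts any ongoing utterance so replies stay current.
            tts.speak(message, TextToSpeech.QUEUE_FLUSH, null, "bridge-utterance");
        }
    }
}
```

QUEUE_ADD could be used instead of QUEUE_FLUSH if queued preset messages should play back-to-back rather than interrupt each other.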

How we built it

  • Frontend: Built with Android (Java, XML) using a clean Material UI and accessibility-first layout.
  • Speech-to-Text: Implemented using Google SpeechRecognizer API for live transcription.
  • Text-to-Speech: Integrated a TTS engine for expressive voice output.
  • AI Layer: Optional Gemini / LLM integration for smart reply suggestions and context-based phrasing.
  • Storage: Conversations stored in MongoDB Atlas for personalization and possible RAG-based suggestions.
  • Additional tools: Firebase for authentication and analytics, plus background threads to run speech detection in parallel with UI rendering.
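As a rough sketch of the STT side, here is how the Google SpeechRecognizer API named above can stream partial transcripts. This is Android-only code that cannot run standalone, and the class and helper names are illustrative rather than the app's own.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

// Illustrative sketch: live transcription with partial results.
public class LiveTranscriber {
    private final SpeechRecognizer recognizer;

    public LiveTranscriber(Activity activity) {
        recognizer = SpeechRecognizer.createSpeechRecognizer(activity);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onPartialResults(Bundle partial) {
                ArrayList<String> texts =
                        partial.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                if (texts != null && !texts.isEmpty()) {
                    showTranscript(texts.get(0)); // hypothetical UI helper
                }
            }
            @Override public void onResults(Bundle results) {
                ArrayList<String> texts =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                if (texts != null && !texts.isEmpty()) showTranscript(texts.get(0));
            }
            // Remaining callbacks left empty for brevity.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onError(int error) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
    }

    public void start() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
        recognizer.startListening(intent);
    }

    private void showTranscript(String text) {
        // In the app this would update a TextView on the main thread.
    }
}
```

startListening requires the RECORD_AUDIO runtime permission; EXTRA_PARTIAL_RESULTS is what makes the transcript update on-screen while the speaker is still talking.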

Challenges we ran into

  • Ensuring low-latency speech recognition while keeping the app lightweight.
  • Handling overlay permissions and the background service that keeps the floating button working across devices.
  • Synchronizing speech recognition, UI updates, and TTS playback without lag.
  • Designing an interface that is simple enough for accessibility yet modern and elegant.
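One way to keep recognition, UI updates, and TTS playback synchronized without lag is to confine the transcript state to a single thread and hand work to it, the same pattern as posting to Android's main looper. A minimal plain-Java sketch of that idea (class and method names are ours, not from the app):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: a single-threaded "UI" executor owns the transcript,
// so recognition callbacks from any thread never race with screen updates.
public class TranscriptPipeline {
    private final ExecutorService uiThread = Executors.newSingleThreadExecutor();
    private final List<String> transcript = new ArrayList<>(); // touched only on uiThread

    // Called from the recognition thread; hops onto the UI thread,
    // analogous to Handler.post(...) on Android's main looper.
    public void onPartialResult(String text) {
        uiThread.execute(() -> transcript.add(text));
    }

    // Reads also run on the UI thread, so no lock is needed anywhere.
    public List<String> snapshot() {
        try {
            return uiThread.submit(() -> new ArrayList<>(transcript)).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() {
        uiThread.shutdown();
    }
}
```

Because the executor processes tasks in submission order, partial results always render in the order they were heard, which is the property that matters for a live conversation view.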

Accomplishments that we're proud of

  • Built a fully functional communication bridge that works for both text and calls.
  • Designed an inclusive UI that balances accessibility and aesthetics.
  • Created a floating overlay button that stays available across apps.
  • Successfully integrated AI for context-aware text suggestions in real-time conversations.

What we learned

  • Deep understanding of speech processing pipelines (STT, TTS, threading, and async event handling).
  • How to design with accessibility as a core principle, not an afterthought.
  • Effective use of multi-threading in Android to handle simultaneous input/output streams.
  • How to make AI useful but not distracting, focusing on real human interaction.

What's next for Bridge

  • Integrate emotion-aware AI voices that adapt tone to context.
  • Add multilingual support and offline mode for global accessibility.
  • Build a Bridge Agent — an intelligent assistant that analyzes past conversations and helps generate better replies using RAG + Atlas Vector Search.
  • Launch Bridge on Google Play Store and partner with assistive communication NGOs to make it free for those in need.

Built With

android · java · xml · google-speech-recognizer · text-to-speech · gemini · mongodb-atlas · firebase
