Inspiration

This project came from a genuinely scary experience for our team. When our teammate Haashir's grandfather had a medical emergency and 911 was called, they were put on hold. Even though it was officially under a minute, it felt much longer in the moment. In situations like that, every second matters. In cardiac emergencies, for instance, survival chances can drop by around 7-10% with each passing minute without help.

That experience made something very clear to us: even short periods of silence during emergency calls can have serious consequences. Dispatch centers are often overwhelmed, dealing with staffing shortages and high call volumes, but that doesn’t mean callers should be left waiting without any support. That’s what pushed us to build an AI-assisted emergency intake system to turn that dead time on hold into useful, life-saving information.

What it does

SIREN is an AI-assisted emergency intake system designed to step in immediately when a caller is placed on hold. Instead of waiting passively in silence, the caller can start describing what’s happening. As they speak, SIREN listens and builds a live, continuously updated report for the dispatcher. It instantly extracts critical information—such as the caller's location, the nature of the emergency, and its severity—so dispatchers have the context they need the second they pick up the line.

How we built it

At the core of the system is natural language processing (NLP). We built a multi-layered architecture to process emergency calls in real time:

  • Keyword Scoring: The system listens to the caller and pulls out key details. Phrases like “not breathing,” “bleeding,” or “fire” are flagged right away. Each keyword carries a weight, and the severity score accumulates over time so nothing critical gets overlooked.
  • Localized Context Engine: We trained the system on city-specific data, including local slang, common landmarks, and unofficial place names. Combined with GPS data, this allows the system to translate what callers say into precise, usable locations.
  • Live Translation Pipeline: We integrated a real-time translation pipeline to ensure language barriers do not delay critical care.
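The keyword-scoring layer can be sketched in a few lines. The phrases and weights below are illustrative placeholders, not our tuned values; the key idea is that the severity score only ever accumulates across utterances:

```python
# Hypothetical weights -- a real deployment would tune these clinically.
KEYWORD_WEIGHTS = {
    "not breathing": 10,
    "unconscious": 9,
    "chest pain": 8,
    "fire": 8,
    "bleeding": 7,
}

def score_utterance(text: str, running_score: int = 0) -> int:
    """Add the weight of every flagged phrase in `text` to the running score."""
    lowered = text.lower()
    for phrase, weight in KEYWORD_WEIGHTS.items():
        if phrase in lowered:
            running_score += weight
    return running_score

score = 0
score = score_utterance("He collapsed and he's not breathing", score)  # +10
score = score_utterance("There's also some bleeding", score)           # +7
```

Because the score is monotone, a critical phrase mentioned once early in the call keeps the report flagged even if the caller later calms down.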

Challenges we ran into

Turning a caller’s speech into English text for the dispatcher was relatively straightforward. The real challenge was figuring out how dispatchers could talk back to non-English speakers in real time without adding latency.

We ended up designing a hybrid solution:

  • Quick Replies: For common responses, the system includes quick, one-click options like "Help is on the way" or "Are they breathing?" These are instantly played back to the caller in their native language using text-to-speech.
  • Complex Translation: For complex situations (like walking someone through CPR), the dispatcher can speak normally in English. The system transcribes it, translates it, and converts it back into speech for the caller.
  • Human Handoff: If the situation gets too complex, the system automatically brings a human translator into the call while it’s still gathering information, ensuring the translator is ready to step in as soon as the dispatcher joins.

Accomplishments that we're proud of

  • Creating a functional two-way translation system: We successfully bridged the communication gap between dispatchers and non-English speaking callers without losing critical time.
  • Building a localized context engine: We successfully trained the AI to understand not just standard addresses, but the highly contextual, localized ways people actually describe their locations during a panic.
  • Meaningfully reducing response times: By gathering critical data while the caller is on hold, our system can effectively shorten response times by 20 to 40 seconds per call, time that can literally save lives.

What we learned

One of our biggest takeaways is that AI in emergency response shouldn’t replace people; it should support them. The goal is to fill in the gaps. More than anything, this project reinforced a simple idea: when someone reaches out for help, they shouldn’t feel unheard. Technology can ensure they are acknowledged immediately and better prepared for what comes next.

What's next for SIREN - 911 Dispatch AI

Moving forward, we want to expand our Localized Context Engine to support more cities and regions, adapting to a wider variety of dialects and local landmarks. We also plan to explore direct integrations with existing Computer-Aided Dispatch (CAD) systems so SIREN's live reports can be injected seamlessly into the software dispatchers already use every day. Ultimately, we want to pilot SIREN with real dispatch centers to refine the UI/UX based on direct feedback from 911 operators.
