Inspiration

Every year, 240 million calls come into 911. And right now, if you speak Spanish, Hindi, Mandarin, or Arabic, the system breaks. Dispatchers spend 2 to 4 minutes locating an interpreter while someone is having a heart attack on the other end of the line. 25 million Americans speak limited English. 10,000 people die annually from preventable dispatch errors. We built Salus because the language you speak should never determine whether you live or die.

What it does

Salus is an AI emergency dispatch co-pilot. When a caller speaks in any language, Salus transcribes and translates their speech, assesses severity, recommends the appropriate units to dispatch, and provides real-time instructions back to the caller in their own language. The human dispatcher sees everything on a clean English dashboard: incident type, severity level, recommended units, and a live translated transcript, all in under 2 seconds. Salus doesn't replace dispatchers. It gives them a superpower.

How we built it

The full pipeline runs on Eigen and Boson's infrastructure:

  • ASR: Boson Higgs Audio Understanding v3.5 for real-time multilingual transcription
  • LLM: Eigen GPT-OSS 120B for triage reasoning, severity classification, unit recommendation, and translation
  • TTS: Eigen Higgs 2.5 for synthesizing calm dispatcher voice responses back to callers in their language
  • Backend: FastAPI + WebSockets for low-latency bidirectional audio streaming
  • Frontend: React + Vite + Tailwind — live dispatch dashboard with severity alerts, animated unit badges, waveform visualizer, and language detection

The audio pipeline is: caller speech → parallel ASR chunks via asyncio.gather → dispatch brain LLM → TTS synthesis → streamed back to caller, all within a single WebSocket session.
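The parallel ASR fan-out can be sketched in a few lines. This is a minimal, self-contained illustration: `transcribe_chunk` is a hypothetical stand-in for the Boson ASR call, and the real pipeline streams chunks over a WebSocket rather than taking one finished buffer.

```python
import asyncio

# Hypothetical stand-in for the Boson ASR call; the real client makes an
# async network request. Here it just echoes the chunk back as text.
async def transcribe_chunk(chunk: bytes) -> str:
    await asyncio.sleep(0)            # yield to the event loop, simulating I/O
    return chunk.decode("utf-8")

async def transcribe(audio: bytes, chunk_size: int = 4096) -> str:
    # Fan the buffer out into fixed-size chunks, transcribe them concurrently,
    # and stitch the results back together. asyncio.gather preserves input
    # order, so the join reconstructs the utterance correctly.
    chunks = [audio[i:i + chunk_size] for i in range(0, len(audio), chunk_size)]
    parts = await asyncio.gather(*(transcribe_chunk(c) for c in chunks))
    return "".join(parts)

print(asyncio.run(transcribe(b"ayuda, mi padre se desmayo", chunk_size=8)))
# → ayuda, mi padre se desmayo
```

The win over sequential awaits is that all chunk requests are in flight at once, so total latency tracks the slowest chunk rather than the sum of all of them.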

The dispatcher AI is built as a persona, not a form-filler. The system prompt models an 18-year veteran dispatcher — trained to ask one thing at a time, never repeat questions, and stay calm under pressure. An internal_reasoning scratchpad field forces a chain of thought before every response.
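One way such a scratchpad can work is a structured reply the model must emit, with the reasoning field first so the chain of thought is written before the caller-facing text. The schema and field names below are illustrative, not Salus's actual contract:

```python
import json

# Illustrative response contract for the dispatch-brain LLM (not the real
# schema). internal_reasoning comes first so the model commits its chain of
# thought before producing a reply the caller will hear.
DISPATCH_SCHEMA_HINT = {
    "internal_reasoning": "private scratchpad, never shown to anyone",
    "severity": "low | moderate | high | critical",
    "incident_type": "e.g. cardiac, fire, traffic accident",
    "recommended_units": ["e.g. ALS ambulance", "engine company"],
    "caller_response": "ONE calm question or instruction, in the caller's language",
}

def parse_brain_reply(raw: str) -> dict:
    """Parse the LLM's JSON reply and drop the scratchpad before display."""
    reply = json.loads(raw)
    reply.pop("internal_reasoning", None)   # the dashboard never shows it
    return reply
```

Stripping the scratchpad at parse time keeps the reasoning step purely internal: it shapes the answer but never leaks into the dispatcher UI or the TTS output.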

Challenges we ran into

Latency was the hardest problem. Early builds had 10–15 second round-trips. The root cause was a single line: Eigen's TTS API requires multipart/form-data, but our HTTP client was sending application/x-www-form-urlencoded. The fix was changing data={} to files={} — but finding it required hours of WebSocket tracing, magic-byte audio validation, and queue debugging.
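The encoding difference can be reproduced offline with `requests` by inspecting a prepared request; the URL and form field names here are placeholders, not Eigen's real API. Passing `files=` is what flips the body to `multipart/form-data`:

```python
import requests

def prepare_tts_request(url: str, text: str, ref_audio: bytes) -> requests.PreparedRequest:
    # files= is the one-line fix: it switches the body to multipart/form-data.
    # The bug was sending everything via data=, which encodes as
    # application/x-www-form-urlencoded and mangles binary audio.
    # (URL and field names are placeholders, not the real endpoint.)
    req = requests.Request(
        "POST",
        url,
        data={"text": text},                      # plain form fields
        files={"audio": ("ref.wav", ref_audio)},  # binary part forces multipart
    )
    return req.prepare()

prepped = prepare_tts_request("https://example.invalid/tts", "hello", b"\x00\x01")
print(prepped.headers["Content-Type"])  # multipart/form-data; boundary=...
```

Because `prepare()` builds the body without sending anything, this kind of check makes a cheap regression test for exactly the content-type bug described above.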

Getting the LLM to sound human was equally hard. Early versions felt like a chatbot filling out an incident report. The breakthrough was rewriting the system prompt as a full persona with embedded good/bad response examples, explicit "ONE thing per turn" enforcement, and a "questions already asked" tracking field to prevent repetition.
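The repetition guard can be sketched as state that is inlined into every turn's prompt, so the model sees its own history and is told not to repeat it. Function name and prompt wording here are hypothetical:

```python
def build_turn_context(transcript: str, asked_questions: list[str]) -> str:
    """Inline the already-asked list into the prompt so the model cannot
    repeat itself. (Names and wording are illustrative, not the real prompt.)"""
    asked = "\n".join(f"- {q}" for q in asked_questions) or "- (none yet)"
    return (
        "QUESTIONS ALREADY ASKED (never repeat these):\n"
        f"{asked}\n\n"
        "Ask about exactly ONE new thing this turn.\n\n"
        f"TRANSCRIPT SO FAR:\n{transcript}"
    )
```

Keeping the asked-questions list in the prompt, rather than trusting the model to remember, makes the "never repeat" rule a constraint the model can check against concrete text each turn.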

Browser autoplay policy blocked audio playback on the frontend until we added global gesture listeners and fell back to an <audio> element for cases where decodeAudioData failed silently.

Accomplishments that we're proud of

  • Sub-2-second multilingual transcription and translation in a live call context
  • A dispatch AI that genuinely sounds like a calm, experienced human operator under pressure
  • Full scripted demo scenarios in English, Spanish, and Hindi: exactly the language-barrier use case the system was built to solve
  • A live call mode that works end-to-end: speak into the mic, get a real dispatcher response in your ear, in real time

What we learned

Multipart encoding will ruin your life if you're not paying attention. But more importantly, the hardest part of building AI for high-stakes domains isn't the model; it's the persona. A language model that sounds like a bureaucrat filling out a form is worse than useless in a 911 context. The prompt engineering that made Salus feel human was as important as any infrastructure decision we made.

What's next for Salus

  • Integration with real CAD (Computer-Aided Dispatch) systems used by municipal 911 centers
  • Support for 20+ languages with regional dialect awareness
  • Offline-capable fallback for rural areas with poor connectivity
  • Pilot program with a real dispatch center
  • FCC and NENA compliance review for production deployment

Built With

boson · eigen · fastapi · python · react · tailwind · vite · websockets
