Auralis was built to address a critical gap in emergency response: the loss and fragmentation of information as it moves from a 911 call to paramedics to hospital intake. In time-sensitive situations, even small pieces of missing context can delay care or lead to misinterpretation.

We set out to design a system that treats audio not just as something to transcribe, but as a source of structured, transferable intelligence. Auralis combines speech transcription with acoustic signal analysis to capture both what is being said and how it is being said. This allows us to extract signals such as pauses, breathing irregularities, and urgency cues alongside textual content.
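As a rough illustration of the kind of acoustic signals described above, the sketch below estimates pause statistics from a mono waveform using a simple frame-energy voice-activity heuristic. The function name, threshold, and frame size are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def pause_features(samples: np.ndarray, sr: int,
                   frame_ms: int = 30, energy_thresh: float = 0.02) -> dict:
    """Estimate pause statistics from mono audio via frame RMS energy.
    Threshold and frame length are illustrative, not tuned values."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    voiced = rms > energy_thresh  # True where speech energy is present

    # Runs of consecutive unvoiced frames count as pauses.
    pauses, run = [], 0
    for v in voiced:
        if not v:
            run += 1
        elif run:
            pauses.append(run * frame_ms / 1000.0)
            run = 0
    if run:
        pauses.append(run * frame_ms / 1000.0)

    return {
        "num_pauses": len(pauses),
        "longest_pause_s": max(pauses, default=0.0),
        "speech_ratio": float(np.mean(voiced)) if n_frames else 0.0,
    }
```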

The system is built around a unified pipeline that powers three stages of the emergency workflow: dispatcher intake, paramedic field reporting, and hospital preparation. Instead of duplicating logic across stages, the same lightweight analysis engine feeds different role-specific views, ensuring continuity of information across the entire chain.
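A minimal sketch of that idea, assuming a single shared analysis record projected into per-role views (all field names here are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CallAnalysis:
    """One analysis record shared across all three workflow stages."""
    transcript: str
    urgency_score: float                               # 0.0 (calm) .. 1.0 (critical)
    acoustic_flags: List[str] = field(default_factory=list)
    extracted_facts: Dict[str, str] = field(default_factory=dict)

# Role-specific projections over the same record.
def dispatcher_view(a: CallAnalysis) -> dict:
    return {"urgency": a.urgency_score, "flags": a.acoustic_flags,
            "location": a.extracted_facts.get("location", "unknown")}

def paramedic_view(a: CallAnalysis) -> dict:
    return {"symptoms": a.extracted_facts.get("symptoms", ""),
            "flags": a.acoustic_flags, "transcript": a.transcript}

def hospital_view(a: CallAnalysis) -> dict:
    return {"summary": a.extracted_facts, "urgency": a.urgency_score}
```

Because every view is derived from the same record, nothing has to be re-entered or re-interpreted when the case moves to the next stage.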

On the frontend, we focused on clarity and usability. Each stage presents only the information relevant to that role, while still allowing deeper inspection through detailed analysis views. We also implemented a simple transfer and handoff system to simulate how responsibilities shift across teams in real-world scenarios.
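The handoff mechanism could be as simple as the following sketch, which records a responsibility transfer and only allows forward moves along the workflow. This is an assumption about the model, not the project's exact logic.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

STAGES = ("dispatcher", "paramedic", "hospital")

@dataclass
class Handoff:
    case_id: str
    from_role: str
    to_role: str
    timestamp: str

def transfer(case_id: str, from_role: str, to_role: str) -> Handoff:
    """Record a responsibility transfer; only forward moves are allowed
    in this simplified model."""
    if STAGES.index(to_role) <= STAGES.index(from_role):
        raise ValueError(f"cannot hand off from {from_role} to {to_role}")
    return Handoff(case_id, from_role, to_role,
                   datetime.now(timezone.utc).isoformat())
```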

One of the key challenges was balancing technical capability with simplicity. Rather than over-engineering with heavy models or complex diarization, we relied on efficient signal processing and heuristic-based interpretation to keep the system fast, explainable, and demo-ready within the hackathon timeframe.
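To make "heuristic-based interpretation" concrete, here is a sketch of how textual keywords and acoustic cues might be blended into a single urgency score. The keyword list and weights are illustrative assumptions, not values from the project.

```python
URGENT_TERMS = {"not breathing", "unconscious", "chest pain", "bleeding"}

def urgency_score(transcript: str, longest_pause_s: float,
                  speech_ratio: float) -> float:
    """Blend keyword hits with acoustic cues into a 0-1 urgency score.
    Weights are illustrative, not tuned values."""
    text = transcript.lower()
    keyword_hits = sum(term in text for term in URGENT_TERMS)
    score = min(keyword_hits, 3) * 0.25               # textual evidence
    score += 0.15 if longest_pause_s > 2.0 else 0.0   # long silences
    score += 0.10 if speech_ratio < 0.4 else 0.0      # fragmented speech
    return min(score, 1.0)
```

Keeping the scoring rule-based keeps every flag explainable: each contribution to the score can be traced back to a specific cue.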

Auralis is not a medical device and is not intended for diagnosis or clinical decision-making. It is a prototype that demonstrates how intelligent processing of audio can improve coordination, reduce information loss, and enable more seamless communication across emergency care systems.
