Inspiration
In mass casualty scenarios, responders operate under extreme stress and often miss critical signs like airway compromise or bleeding. We wanted to build a system that helps surface these signals quickly and reliably.
What it does
Aegis is a multi-modal triage support system that combines audio and visual inputs to assist with casualty prioritization. Users can click on individuals in a map-based UI to access respiratory analysis and visual cues, which are fused into a triage recommendation with reasoning.
How we built it
We built separate audio and vision pipelines that feed into a centralized triage engine grounded in SALT and MARCH protocols. The system uses rule-based scoring augmented with AI-assisted reasoning, and a Streamlit UI ties everything together into an interactive demo.
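The write-up doesn't include the actual scoring code, but the fusion step it describes could look roughly like the sketch below: the audio and vision pipelines each emit structured observations, and a rule-based engine maps them to a SALT-style category. All names here (`Observation`, `salt_category`, the thresholds) are illustrative assumptions, not the real Aegis implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    """Fused per-casualty signals; None means 'not observed'."""
    respiratory_rate: Optional[float]  # breaths/min, from the audio pipeline
    major_bleeding: Optional[bool]     # from the vision pipeline
    ambulatory: Optional[bool]         # from the vision pipeline

def salt_category(obs: Observation) -> str:
    """Simplified SALT-style sort: maps observations to a triage category.

    Thresholds are placeholders for illustration only.
    """
    if obs.respiratory_rate is not None and obs.respiratory_rate == 0:
        return "expectant"  # no breathing detected
    abnormal_rr = obs.respiratory_rate is not None and (
        obs.respiratory_rate < 10 or obs.respiratory_rate > 29
    )
    if obs.major_bleeding or abnormal_rr:
        return "immediate"
    if obs.ambulatory:
        return "minimal"
    return "delayed"
```

In the full system, a rule-based category like this would be passed to the AI-assisted reasoning layer along with the raw signals, so the final recommendation can include an explanation of which cues drove it.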
Challenges we ran into
The biggest challenge was realistic simulation: making sure simulated inputs didn't bias outputs (for example, a default bleeding value caused every patient to appear critical). Integrating the subsystems and keeping data flowing consistently between them were also nontrivial.
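The default-bleeding bug described above is a common pitfall: an unobserved signal gets a worst-case default instead of being treated as unknown. A minimal sketch of the fix, assuming a hypothetical `bleeding_score` helper (not the actual Aegis code), is to distinguish "not observed" from "observed absent" with `None`:

```python
from typing import Optional

def bleeding_score(major_bleeding: Optional[bool]) -> int:
    """Contribute to the triage score only when bleeding was actually observed.

    The buggy version effectively used `major_bleeding: bool = True`,
    so every simulated patient scored as critical. Using None for
    'unknown' keeps missing data from biasing the result either way.
    """
    if major_bleeding is None:
        return 0  # no evidence either way; contribute nothing
    return 5 if major_bleeding else 0
```

The same pattern applies to every fused signal: defaults should encode "no information," never a clinical finding.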
Accomplishments that we're proud of
We built a full end-to-end system that integrates multiple modalities into a coherent, interpretable triage decision with a clean interactive interface.
What we learned
We learned that strong system design and integration matter more than individual models, especially in real-world, high-pressure applications.
What's next for Aegis
We’d improve data realism, incorporate live inputs, and further refine the triage engine for robustness and trust.