Inspiration

The idea for DRAS came from witnessing inefficiencies in real disaster responses, whether during hurricanes or conflicts like the one in Gaza. Problems such as communication breakdowns, manual coordination, and language barriers make lifesaving decisions harder. We asked: What if disaster response was as simple as having a conversation?

What it does

DRAS transforms disaster coordination into a two-phase AI conversation:

  1. Context Phase – Emergency managers describe cities and resources in plain English (e.g., “LA has 4M people, LA Hub has 100K medical supplies”). AI extracts structured data, builds inventories, and generates interactive maps.
  2. Crisis Phase – Users describe the disaster (e.g., “7.2 earthquake in LA, 2.5M people affected”). AI analyzes the situation, optimizes resource allocation with distance-based algorithms, and outputs actionable recommendations with visual coverage charts.

In minutes, coordinators get deployment plans, visualizations, and response strategies—without complex software or hours of manual work.
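As a concrete illustration, the Context Phase sentence above might be turned into structured data roughly like this. The field names below are hypothetical, chosen for the example; the writeup doesn't specify DRAS's actual schema:

```python
# Hypothetical structured data extracted from
# "LA has 4M people, LA Hub has 100K medical supplies".
# Field names are illustrative, not DRAS's actual schema.
context = {
    "cities": [
        {"name": "LA", "population": 4_000_000},
    ],
    "resource_centers": [
        {"name": "LA Hub", "city": "LA",
         "resource": "medical supplies", "quantity": 100_000},
    ],
}

# Once the data is structured, inventory totals and map markers
# follow directly from it.
total_supplies = sum(c["quantity"] for c in context["resource_centers"])
print(total_supplies)  # 100000
```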

How we built it

  • Architecture – Clean separation of layers: TwoPhaseDisasterGUI (UI), DisasterCoordinator (logic), GPTOSSEngine (AI).
  • AI Integration – GPT-OSS with few-shot learning for reliable extraction.
  • Optimization – Distance-weighted priority scoring balances availability, distance, and severity.
  • Visualization – Real geographic projections, dynamic resource markers, and animated disaster indicators.
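The distance-weighted priority scoring described above could be sketched as follows. The weights, the haversine distance, and the exact discounting formula are assumptions for illustration, not DRAS's actual implementation:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def priority_score(available, distance_km, severity,
                   w_avail=0.4, w_dist=0.4, w_sev=0.2):
    """Balance availability (more is better), distance (closer is better),
    and severity (more severe raises priority). `available` and `severity`
    are normalized to [0, 1]; distance is discounted with 1 / (1 + km)."""
    return (w_avail * available
            + w_dist * (1 / (1 + distance_km))
            + w_sev * severity)

# For the same crisis, a nearby center outranks a distant one
# with identical stock levels.
near = priority_score(available=0.8, distance_km=10, severity=0.9)
far = priority_score(available=0.8, distance_km=300, severity=0.9)
assert near > far
```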

Challenges we ran into

  1. Pre-trained Model Shift – GPT-OSS didn’t follow plain instructions the way ChatGPT does. Fixed with demonstration-based (few-shot) prompts.
  2. Resource Duplication Bug – Caused by hardcoded map positions. Fixed with real coordinate projection.
  3. Resource Display Errors – Fixed divisor logic to correctly show resource counts (e.g., “100K”).
  4. JSON Parsing Issues – Solved with robust brace-matching.
  5. Complex State Management – Added reset functionality for fresh simulations.
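The brace-matching fix from challenge 4 can be sketched like this. It's a minimal version assuming the model's reply wraps a single JSON object in free text; notably, it doesn't handle braces inside JSON strings:

```python
import json

def extract_json(text):
    """Pull the first balanced {...} block out of free-form model output
    and parse it. Returns None if no parseable balanced object is found.
    Minimal sketch: does not account for braces inside JSON strings."""
    start = text.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:  # matching close brace found
                try:
                    return json.loads(text[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None  # unbalanced braces

reply = 'Sure! Here is the plan: {"city": "LA", "units": 100000} Hope that helps.'
print(extract_json(reply))  # {'city': 'LA', 'units': 100000}
```

Brace matching tolerates the chatty prefix and suffix that models often add around JSON, which a plain `json.loads` on the whole reply would choke on.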

Accomplishments that we're proud of

  • Innovation – Two-phase UX, AI-only extraction, real-time geographic mapping.
  • Efficiency – 70% code reduction; <3 min response time for extraction + optimization.
  • Performance – 95%+ extraction accuracy; scales to 10+ cities & 20+ centers.
  • User Experience – Enterprise-grade UI, instant reset/restart, clear visual reasoning.

What we learned

  1. Pre-trained vs. Instruction-tuned Models – GPT-OSS requires demonstrations, not commands.
  2. Simplicity Over Features – Emergency responders prefer reliability over complexity.
  3. Two-Phase Thinking – Mirrors how coordinators naturally plan: first context, then crisis.
  4. Visualization Builds Trust – Clear maps, charts, and progress bars make AI decisions transparent.
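Lesson 1 in practice: instead of a command like "Extract the cities as JSON", a demonstration-based prompt shows the model worked input/output pairs and lets it continue the pattern. A hypothetical sketch (the example pairs and format are made up for illustration):

```python
# Hypothetical few-shot prompt: the model sees worked examples
# and completes the final "Data:" line, rather than being instructed.
FEW_SHOT_EXAMPLES = """\
Text: NYC has 8M people, Harlem Depot has 50K food kits
Data: {"city": "NYC", "population": 8000000, "center": "Harlem Depot", "resource": "food kits", "quantity": 50000}

Text: LA has 4M people, LA Hub has 100K medical supplies
Data: {"city": "LA", "population": 4000000, "center": "LA Hub", "resource": "medical supplies", "quantity": 100000}
"""

def build_prompt(user_input):
    # Plain concatenation (not str.format) so the braces in the
    # JSON demonstrations are left untouched.
    return FEW_SHOT_EXAMPLES + f"\nText: {user_input}\nData:"

prompt = build_prompt("Chicago has 2.7M people, South Side Depot has 20K blankets")
```

A base model that ignores an imperative instruction will often still continue a consistent pattern like this, which matches the "demonstrations, not commands" lesson above.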

What's next for DRAS

  • Use instruction-tuned models for more robust extraction.
  • Integrate real-time data sources (e.g., Google Maps).
  • Test in simulated crisis environments.
  • Expand to multilingual and global disaster scenarios.
