Inspiration

Video: https://drive.google.com/file/d/1XN997klIFmQ-SWGQHdO51Q8G7rBo4216/view?usp=sharing

Wildfires force families to juggle scattered updates (agencies, utilities, social), guess what’s trustworthy, and coordinate who’s safe—all while stress is high. We wanted one calm place to see the incident, trust-weight sources, track people and needs, and only run risky steps after a human says yes—plus voice when texting isn’t enough.

What it does

WorldFire Response (SafeSignal) is a web MVP for household wildfire response. It combines a live-style dashboard with an incident map, discovered sources with trust levels, family status tracking, an approval queue for agent-proposed actions, matched resources (shelters, roads, outages, etc.), evidence and audit trails, and a voice command center (Vapi) for browser sessions and call history. Household onboarding captures address, supplies, pets, accessibility needs, and voice consent. Demo mode runs without full backend keys, while live mode targets InsForge + TinyFish + Vapi. External shelter forms stay draft-only in this MVP: no auto-submit to third-party emergency sites.
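The "trust-weight sources" idea can be sketched as a simple ranking over discovered sources. This is an illustrative sketch only: the level names, weights, and types are our assumptions, not the project's actual `lib/types.ts` scheme.

```typescript
// Illustrative trust-weighting for discovered sources.
// TrustLevel names and TRUST_WEIGHT values are assumptions.
type TrustLevel = "official" | "verified" | "unverified";

interface DiscoveredSource {
  url: string;
  summary: string;
  trust: TrustLevel;
}

const TRUST_WEIGHT: Record<TrustLevel, number> = {
  official: 3,
  verified: 2,
  unverified: 1,
};

// Surface higher-trust sources first on the dashboard.
function rankSources(sources: DiscoveredSource[]): DiscoveredSource[] {
  return [...sources].sort(
    (a, b) => TRUST_WEIGHT[b.trust] - TRUST_WEIGHT[a.trust]
  );
}
```

Keeping the weights in one table makes the trust policy auditable alongside the URLs and summaries shown in the UI.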

How we built it

We used Next.js 15 (App Router) and React 19 with TypeScript and Tailwind CSS 4. Client state loads from GET /api/dashboard, which prefers InsForge (Postgres + optional storage) and falls back to in-memory demo data so judges can try the UI quickly. TinyFish Search/Fetch powers wildfire source discovery and extraction. Vapi (Web + Server SDK) handles voice UX and server webhooks/tools. MapLibre + react-map-gl drive the incident map. Domain types live in one shared lib/types.ts layer; API routes under app/api/* orchestrate monitoring, refresh, approvals, check-ins, and voice config.
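The InsForge-first, demo-fallback behavior behind GET /api/dashboard can be sketched as a small loader. Function names, the env var, and the data shape here are our illustration of the pattern, not the actual route code.

```typescript
// Sketch of the dashboard loading strategy: prefer the live InsForge
// backend, fall back to in-memory demo data so the UI stays usable
// for judges without credentials. Names are illustrative.

interface DashboardData {
  incidents: { id: string; title: string }[];
  demoMode: boolean;
}

const DEMO_DASHBOARD: DashboardData = {
  incidents: [{ id: "demo-1", title: "Sample wildfire incident" }],
  demoMode: true,
};

async function fetchFromInsForge(): Promise<DashboardData> {
  if (!process.env.INSFORGE_API_KEY) {
    throw new Error("InsForge not configured");
  }
  // A real implementation would query InsForge (Postgres) here.
  throw new Error("live query not implemented in this sketch");
}

async function loadDashboard(): Promise<DashboardData> {
  try {
    return await fetchFromInsForge();
  } catch {
    // Missing keys or backend error: serve demo data rather than
    // showing a broken dashboard.
    return DEMO_DASHBOARD;
  }
}
```

The point of the try/catch boundary is that the UI consumes one shape either way, with `demoMode` making the fallback explicit instead of silently mixing demo and live data.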

Challenges we ran into

Balancing realistic crisis UX with safety defaults was the hardest part: anything that could notify people or touch the outside world had to stay approval-gated and consent-aware. We also had to unify the demo and live paths so the UI never misrepresents data, and make trust and provenance visible (URLs, summaries, audit events) instead of "the model said so." Voice added latency, tooling, and webhook edge cases, so we kept scope to an MVP we could demo reliably.
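The approval-gated, consent-aware default amounts to one guard in front of every side-effecting step. A minimal sketch, with field names of our own invention:

```typescript
// Hypothetical action shape; field names are illustrative.
interface Action {
  id: string;
  sideEffecting: boolean;   // would notify people or touch external sites
  approvedByHuman: boolean; // set via the approval queue
  consentGiven: boolean;    // e.g. voice consent from onboarding
}

// Central guard: nothing side-effecting runs without explicit human
// approval and recorded consent. (External shelter forms additionally
// stay draft-only in the MVP, regardless of approval.)
function canExecute(action: Action): boolean {
  if (!action.sideEffecting) return true;
  return action.approvedByHuman && action.consentGiven;
}
```

Routing every risky step through one predicate keeps the safety policy in a single auditable place rather than scattered across features.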

Accomplishments that we're proud of

A coherent end-to-end story: monitor → plan → voice → evidence → household, all behind one shell. Source-first thinking with a proof vault and timeline. Human-in-the-loop by design, not as an afterthought. A production-shaped stack (InsForge schema, typed APIs) that still runs in demo mode for anyone without keys.

What we learned

Safety defaults have to be designed in from the start: approval gates and consent checks are much easier to enforce when every risky action flows through one queue. Demo and live data need a single contract, or the UI ends up misrepresenting what it knows. Trust is only useful when it is visible: users believe URLs, summaries, and audit events, not "the model said so." And voice carries real latency and webhook complexity, so a tightly scoped MVP beats a flaky broader build.

What's next for WorldFire Response

Deeper official feed integrations and push alerts where policy allows; RLS-hardened multi-tenant InsForge; smarter resource matching to real shelter APIs; optional Guild (or similar) for live policy traces; offline-first snippets for poor connectivity; accessibility pass and localization; pilot with real emergency partners only under their governance—not as a consumer replacement for 911.

Built With

  • guild.ai
  • insforge
  • tinyfish
  • wundergraph