Inspiration
When a hurricane hits Tampa Bay, federal relief resources take 30–90 days to reach affected communities. The pipeline runs from Congress to FEMA to the state, to counties, and finally to communities. The people who need help most (low-income neighborhoods, elderly residents, zero-vehicle households) consistently get resources last. Not because of malice, but because the system routes to wherever is easiest to reach, which is never the most vulnerable neighborhood.
Two sponsor conversations confirmed what we suspected: the federal funding pipeline is the bottleneck, and the coordination gap between agencies is the mechanism. Nobody has a real-time picture of what's available and who needs what. Everything is done manually: phone calls, forms, email. The result is billions in waste and unquantifiable human suffering every hurricane season.
We wanted to build the layer that's missing: a system that sees everything across all channels simultaneously and makes the equity math automatic.
How we built it
We started with a capital flow analysis before writing a single line of code. We mapped the disaster relief pipeline like a supply chain: where does the money come from, where does it get stuck, and who pays the cost of the delay? That analysis told us exactly where to position the system — between state/local agencies and affected communities, sitting alongside FEMA rather than replacing it.
Challenges we ran into
Google ADK: the LoopAgent convergence logic differs from standard ML stopping criteria. We had to define our own delta threshold and found that 0.05 was too tight for sparse resource sets with fewer than four active needs. We ended up adding a minimum iteration floor of two, regardless of convergence, to prevent single-iteration false positives.
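The stopping rule above can be sketched in a few lines. This is an illustrative reconstruction, not Google ADK API code; the names `DELTA_THRESHOLD` and `MIN_ITERATIONS` are our own.

```python
DELTA_THRESHOLD = 0.05  # proved too tight for sparse resource sets in practice
MIN_ITERATIONS = 2      # floor that prevents single-iteration false positives

def has_converged(score_history: list[float], iteration: int) -> bool:
    """Stop only after the minimum iteration floor AND a small score delta."""
    if iteration < MIN_ITERATIONS:
        return False          # never declare convergence on the first pass
    if len(score_history) < 2:
        return False          # need two scores to measure a delta
    delta = abs(score_history[-1] - score_history[-2])
    return delta < DELTA_THRESHOLD
```

The floor guards against the degenerate case where a single optimizer pass happens to land within the threshold by chance.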
CDC SVI data: the index is tract-level, not ZIP-level. We had to aggregate census tracts per ZIP code using population-weighted averaging, which shifted our equity scores by up to 0.8 points versus a naive average. This matters when the difference between two ZIPs decides who gets resources first.
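The aggregation itself is simple; the function below is a hypothetical helper showing the population-weighted approach, not CDC code.

```python
def zip_svi(tracts: list[tuple[float, int]]) -> float:
    """Population-weighted SVI for one ZIP.

    tracts: (svi_score, population) pairs for every census tract
    that overlaps the ZIP code.
    """
    total_pop = sum(pop for _, pop in tracts)
    # weight each tract's score by its share of the ZIP's population
    return sum(svi * pop for svi, pop in tracts) / total_pop
```

For example, a ZIP containing a 9,000-person tract scoring 0.9 and a 1,000-person tract scoring 0.1 gets a weighted score of 0.82, versus 0.5 from a naive mean, which is exactly the kind of gap that reorders who gets resources first.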
A2A communication: agents writing to the same Firestore session document created race conditions we did not anticipate. We resolved them with Firestore transactions and a session-state lock, but it taught us that multi-agent shared state requires the same concurrency discipline as any distributed system.
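A toy stand-in for the race we hit: two "agents" doing read-modify-write on shared session state. The `threading.Lock` plays the role of the Firestore transaction here; this illustrates the pattern, not our actual Firestore code.

```python
import threading

session = {"matched": 0}   # stand-in for the shared session document
lock = threading.Lock()    # stand-in for a Firestore transaction

def record_matches(n: int) -> None:
    for _ in range(n):
        with lock:                   # without this, concurrent updates get lost
            session["matched"] += 1  # read-modify-write must be atomic

threads = [threading.Thread(target=record_matches, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# session["matched"] reaches 40_000 reliably only because of the lock
```

Remove the `with lock:` line and the final count becomes nondeterministic, which is exactly the class of bug we saw when multiple agents wrote to one session document.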
Operator interface design: we went through four versions of the match review UI. Every additional option we added increased cognitive load. We landed on three buttons (Accept / Modify / Skip) because in a real disaster, an operator making decisions under stress needs the interface to disappear. The AI should be doing the thinking, not the human.
The hardest problem was not technical: it was figuring out what not to build. The system deliberately does not replace FEMA's legal authority, does not handle fund disbursement, and does not require unified agency adoption. Every one of those constraints was a choice to keep the system deployable.
What we learned
No live disaster data during build: we used bundled fallback CSVs and JSON files for the CDC SVI data and sample inventories, with live API calls tested against Florida's real data in the final hours.
Team coordination across four slices: we defined a strict interface contract before anyone wrote code (input/output types, API shapes, data models) and used Firestore as the real-time coordination layer between frontend and backend, which meant the dashboard could be built in parallel with the agents.
Convincing ourselves to keep the human in the loop: early versions auto-dispatched on optimizer convergence. We pulled that back after realizing that an operator with one tap of authority is both legally safer and more trusted by the agencies we'd need to partner with.
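An interface contract of the kind described above can be as small as a pair of typed records. The field names below are illustrative assumptions, not our production schema; the point is that every slice agreed on shapes like these before writing code.

```python
from typing import Literal, TypedDict

class NeedRecord(TypedDict):
    """One community need, as exchanged between backend slices."""
    zip_code: str
    category: Literal["water", "shelter", "medical", "food"]
    quantity: int
    svi_score: float  # population-weighted, 0.0 to 1.0

class MatchProposal(TypedDict):
    """What the optimizer hands the operator dashboard for review."""
    need: NeedRecord
    resource_id: str
    operator_action: Literal["accept", "modify", "skip"]
```

Because the dashboard only ever consumed `MatchProposal`-shaped documents from Firestore, the frontend and agent teams could build against the contract in parallel.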