The problem is 72 hours old every time.
Hurricane season in Florida isn't a surprise. We know it's coming. We've known since June. But every single time a storm makes landfall, the same chaos plays out: NOAA says one thing, the county says another, gas stations are either empty or have hour-long lines, and nobody knows which shelters are actually open until they drive there and find out.
During Milton, 6.5 million Floridians were told to prepare. The information they needed was real, updated, and available. It was just scattered across api.weather.gov, Google Maps, county emergency portals, local news, and a half-dozen Facebook groups that may or may not have been accurate. The average person had to check 4-5 sources, manually cross-reference locations, and make a decision based on incomplete data while the clock was ticking.
The technology to fix this exists. Weather APIs are free and real-time. Google knows which gas stations are open right now. AI can synthesize information and generate plans.
So I built NOTUS.
What it does
NOTUS deploys four AI agents the moment you enter your location. Each agent has a specific job, specific tools, and a specific personality. They work as a team.
Recon is the weather specialist. It pulls active alerts from the National Weather Service, fetches your local forecast down to wind speed and precipitation timing, and assigns a threat level from 1 to 5. If it finds conflicting data (a hurricane warning alongside a sunny forecast), it flags the discrepancy and re-checks before passing intel to the team.
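Recon's scoring and conflict check can be sketched as a small pure function. The alert names, severity mapping, and "calm forecast" heuristic below are illustrative assumptions, not the app's actual logic:

```typescript
// Hypothetical sketch of Recon's threat scoring: map NWS alert event
// names to a 1-5 level and flag alert/forecast discrepancies.
type NwsAlert = { event: string };

const SEVERITY: Record<string, number> = {
  "Hurricane Warning": 5,
  "Hurricane Watch": 4,
  "Tropical Storm Warning": 3,
  "Tropical Storm Watch": 2,
  "Coastal Flood Advisory": 1,
};

function assessThreat(
  alerts: NwsAlert[],
  forecastSummary: string
): { level: number; conflict: boolean } {
  // Threat level is the worst severity among active alerts (floor of 1).
  const level = Math.max(1, ...alerts.map((a) => SEVERITY[a.event] ?? 1));
  // Flag a discrepancy when a severe alert coexists with a calm forecast,
  // so the agent knows to re-check before reporting to Dispatch.
  const calm = /sunny|clear/i.test(forecastSummary);
  return { level, conflict: level >= 4 && calm };
}
```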
Supply is the resource hunter. It searches Google Places for open gas stations, grocery stores, and pharmacies within 5km of your position. It knows that stations on major evacuation highways (I-275, I-75, I-4) will have the longest lines, so it deprioritizes those and surfaces side-street alternatives first.
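The corridor deprioritization amounts to a two-key sort. This is a minimal sketch under assumed field names, matching highways by substring rather than whatever the real tool does:

```typescript
// Sketch of Supply's ranking heuristic: stations whose address mentions
// a major evacuation corridor sort last; within each group, nearest first.
type Station = { name: string; address: string; distanceKm: number };

const EVAC_CORRIDORS = ["I-275", "I-75", "I-4"];

function rankStations(stations: Station[]): Station[] {
  const onCorridor = (s: Station) =>
    EVAC_CORRIDORS.some((hwy) => s.address.includes(hwy));
  return [...stations].sort((a, b) => {
    const pa = onCorridor(a) ? 1 : 0;
    const pb = onCorridor(b) ? 1 : 0;
    // Side-street options first, then by distance within each group.
    return pa - pb || a.distanceKm - b.distanceKm;
  });
}
```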
Shelter is the safety scout. It finds community centers, schools, churches, and convention centers within 10km. It warns about proximity clustering: if three shelters are within a mile of each other, they'll all fill up at the same time. It recommends at least one dispersed option.
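The proximity-clustering warning boils down to pairwise great-circle distances. A hedged sketch using the haversine formula, with an assumed shape for shelter records and a ~1 mile (1.6 km) radius:

```typescript
// Flag shelters that sit within ~1 mile of at least two others;
// these will likely fill up at the same time.
type Shelter = { name: string; lat: number; lng: number };

function haversineKm(a: Shelter, b: Shelter): number {
  const R = 6371; // Earth radius in km
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function clusteredShelters(shelters: Shelter[], radiusKm = 1.6): string[] {
  // A shelter is "clustered" when two or more others are within radiusKm.
  return shelters
    .filter(
      (s) =>
        shelters.filter((o) => o !== s && haversineKm(s, o) <= radiusKm)
          .length >= 2
    )
    .map((s) => s.name);
}
```

Any shelter not on the returned list is a candidate for the dispersed recommendation.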
Dispatch is the coordinator. It waits for all three agents to report, cross-checks their recommendations for conflicts, and synthesizes one action plan: your threat level, your nearest fuel, your nearest shelter, and a clear directive with a timestamp. "Fill up by 4 PM. Evacuate north on I-275 by 6 PM."
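The synthesis step could look roughly like this. The report shape, the 6-hour/4-hour back-planning offsets, and the threat threshold are all assumptions for illustration, not NOTUS's real schema:

```typescript
// Hypothetical sketch of Dispatch merging the three agents' reports
// into one timestamped directive.
interface Reports {
  threatLevel: number;   // from Recon, 1-5
  nearestFuel: string;   // from Supply
  nearestShelter: string; // from Shelter
  landfallEta: Date;
}

function synthesizePlan(r: Reports): string {
  const fmt = (d: Date) => d.toISOString().slice(11, 16) + " UTC";
  // Back-plan from landfall: fuel up 6 h out, be moving 4 h out.
  const fuelBy = new Date(r.landfallEta.getTime() - 6 * 3600_000);
  const moveBy = new Date(r.landfallEta.getTime() - 4 * 3600_000);
  const verb = r.threatLevel >= 4 ? "Evacuate to" : "Shelter at";
  return `Threat ${r.threatLevel}/5. Fill up at ${r.nearestFuel} by ${fmt(
    fuelBy
  )}. ${verb} ${r.nearestShelter} by ${fmt(moveBy)}.`;
}
```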
You watch all of this happen live. The agents show their thinking in real-time, pins drop on a dark-mode map as resources are found, and the agents talk to each other like an operations team on a radio channel.
After the plan generates, you can ask follow-up questions. "What about my pets?" "Are there hospitals nearby?" "What if the bridge is closed?" Dispatch answers with full context from the original analysis.
How I built it
The agent system runs on Google ADK (Agent Development Kit) with Gemini 2.5 Flash powering all four agents. Each agent is an LlmAgent with specialized FunctionTool instances validated by Zod v4 schemas.
The tools call real APIs:
- NWS API (api.weather.gov) for alerts and forecasts. Free, no auth, real-time.
- Google Places API (New) for nearby businesses and shelters with open/closed status.
- Browser Geolocation API for instant positioning without typing a ZIP code.
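For reference, the two NWS endpoints the weather tool would hit look like this (the NWS API asks for coordinates rounded to four decimal places and a User-Agent header identifying your app; the function name is mine):

```typescript
// Build the two api.weather.gov request URLs for a coordinate pair.
// No API key needed; NWS only asks for an identifying User-Agent.
function nwsUrls(lat: number, lon: number) {
  const p = `${lat.toFixed(4)},${lon.toFixed(4)}`;
  return {
    // Active alerts covering this point
    alerts: `https://api.weather.gov/alerts/active?point=${p}`,
    // Point metadata; its JSON response links to the gridded forecast URL
    point: `https://api.weather.gov/points/${p}`,
  };
}
```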
The backend API route uses Server-Sent Events to stream agent updates to the frontend in real-time. As the ADK runner processes each event, it detects which agent is active, extracts map coordinates from tool responses, and pushes StreamChunk objects to the dashboard. The frontend reads with response.body.getReader() and updates React state incrementally.
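The client-side parsing can be reduced to a buffer-splitting function like the one below. The StreamChunk shape here is a stand-in, not the project's actual type; events are assumed to follow the standard `data: ...` SSE framing with blank-line separators:

```typescript
// Accumulate decoded text from reader.read() into a buffer, then parse
// complete SSE events; the trailing partial event is carried over.
type StreamChunk = { agent: string; text: string };

function parseSseBuffer(buffer: string): {
  chunks: StreamChunk[];
  rest: string;
} {
  const events = buffer.split("\n\n");
  const rest = events.pop() ?? ""; // last piece may be incomplete
  const chunks: StreamChunk[] = [];
  for (const evt of events) {
    for (const line of evt.split("\n")) {
      if (line.startsWith("data: ")) {
        chunks.push(JSON.parse(line.slice(6)));
      }
    }
  }
  return { chunks, rest };
}
```

In the dashboard, each call to `reader.read()` would decode its chunk, append it to the buffer, run this parser, and feed the parsed chunks into React state.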
The frontend is Next.js 16 with React 19 and Tailwind CSS 4. The map is fullscreen with the sidebar and action bar floating as glassmorphic panels. I built custom map markers using google.maps.OverlayView because AdvancedMarker requires a mapId that conflicts with custom dark styles. Pins spring-animate in, and the map camera pans progressively as agents report.
Deployed to Google Cloud Run via Cloud Build, using a multi-stage Docker build with output: standalone set in the Next.js config. Live at notus.rnoel.dev.
Challenges I ran into
The ADK had minimal TypeScript examples. Google's Agent Development Kit is still relatively new, and TypeScript documentation was sparse compared to Python. I figured out FunctionTool constructor signatures, InMemoryRunner event streams, and sub-agent delegation by reading the SDK source code directly.
Two Google API keys that look identical but aren't. The Gemini API key from AI Studio and the Maps Platform key from Cloud Console have different permissions. My Places API calls returned 403 for over an hour before I realized the tools were using the wrong key. Once I separated them into GEMINI_API_KEY and GOOGLE_API_KEY, everything clicked.
Making the UI feel alive, not static. The first version waited 15 seconds in silence, then dumped everything on screen at once. Switching to SSE streaming and building progressive animations (shimmer loading bars, thinking messages, pin spring animations, the "Agents analyzing..." floating pill) is what turned a tool into a proper demo.
Google Maps dark mode. The mapId prop and custom styles prop can't coexist. But AdvancedMarker requires mapId. I solved this by building custom HtmlMarker components using OverlayView with React portals. Full dark theme, no compromises.
What I learned
- Multi-agent orchestration with Google ADK and how sub-agent delegation works under the hood
- Server-Sent Events for streaming AI to a React frontend without WebSockets
- Google Cloud Run deployment with Cloud Build CI/CD
- The Google Places API (New) and NWS Weather API, both new to me
Accomplishments I'm proud of
The moment that made it real was the first time I typed a ZIP code and watched all four agents light up, start talking to each other by name, and drop pins on the map one by one. The Recon agent said "Dispatch, weather assessment for Tampa Bay is ready" and I got really excited. These weren't canned responses. The agent had actually called the NWS API, parsed the data, and synthesized a threat assessment in natural language. Then Supply and Shelter fired in parallel, found real businesses, and Dispatch pulled it all together into a plan. It worked.
I'm also proud of the UI. Hackathon projects often look like hackathon projects. I wanted NOTUS to look like a product: the fullscreen dark map with floating glass panels, the spring-animated pin drops, the shimmer bars while agents think, and the "Agents analyzing..." pill floating over the map.
And honestly, I'm proud that it's deployed. Not localhost. Not a screenshot. A live URL on Google Cloud Run that anyone can open on their phone right now and get a real hurricane preparedness plan for their location. That felt like shipping something real.
What's next for NOTUS
- Real-time shelter capacity from county emergency management APIs
- Traffic-aware routing that avoids congested evacuation corridors
- Push notifications when threat levels change
- Multi-state expansion beyond Florida
- Voice briefings for hands-free listening while evacuating
Built With
- gemini
- google-adk
- google-cloud-run
- google-maps
- google-places
- nextjs
- react
- sse
- tailwindcss
- typescript
- zod