The Story of OMIREACH (Simulated Workforce Edition)
💡 Inspiration
The inspiration for OMIREACH comes from a critical gap in disaster relief: The Analysis-to-Action Gap. While we have plenty of AI that can "summarize" a disaster, we lack systems that can "solve" one. We wanted to move beyond the chatbot and build a simulated end-to-end workforce. By creating a high-fidelity agentic ecosystem, we can test how autonomous agents observe live global incidents and theoretically execute complex robotic logistics without human bottlenecks.
🛠️ How We Built It
OMIREACH is a simulated "Workforce for Good" built on a modular, agentic architecture.
- The Brain (Google ADK & A2A): We used the Google Agent Development Kit to create distinct worker lanes. The system doesn't just "talk"; it dispatches work. We used `ParallelAgent` for real-time data enrichment and `LoopAgent` for iterative verification of simulated tasks.
- The Workflow (Queue-Based Boundaries): To ensure the simulation mirrors real-world service architecture, we built explicit worker lanes such as `sentinel-observer` and `robotics-worker`. This decouples the "Sensing" (GDACS/USGS APIs) from the "Doing" (Kit Assembly).
- Physical Reasoning (Simulated Robotics): We integrated a `robotics-worker` lane that translates high-level logistical needs into simulated pick plans. This lets the agent reason over a "Workspace Inventory" and design specialization-specific kits (e.g., medical vs. food) based on the incident context.
- The Tech Stack: Built with Next.js for the operator UI and Node.js for the orchestration layer, utilizing the Gemini API for the core reasoning and the Google Maps API for live geospatial visualization of incident zones.
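The queue-based worker-lane pattern above can be sketched in a few lines of TypeScript. This is a minimal illustration of the design, not the actual OMIREACH code: the lane names mirror the ones mentioned (`sentinel-observer`, `robotics-worker`), but the types, field names, and kit contents are assumptions made for the example.

```typescript
// Illustrative sketch: a sensing lane enqueues normalized packets onto a
// doing lane, so the two sides never call each other directly.
type IncidentPacket = { id: string; kind: "medical" | "food"; severity: number };

class Lane<T> {
  private queue: T[] = [];
  constructor(private handler: (msg: T) => void) {}
  enqueue(msg: T) { this.queue.push(msg); }
  drain() { while (this.queue.length > 0) this.handler(this.queue.shift()!); }
}

// "Doing": the robotics lane turns an incident packet into a kit plan.
const outbox: string[][] = [];
const roboticsWorker = new Lane<IncidentPacket>((pkt) => {
  // Kit contents are hypothetical examples of specialization-specific kits.
  const kit = pkt.kind === "medical" ? ["bandages", "antiseptic"] : ["rations", "water"];
  outbox.push(kit);
});

// "Sensing": the sentinel lane normalizes a raw feed event and hands it off.
function sentinelObserver(raw: { event_id: string; category: string; mag: number }) {
  roboticsWorker.enqueue({
    id: raw.event_id,
    kind: raw.category === "health" ? "medical" : "food",
    severity: raw.mag,
  });
}

sentinelObserver({ event_id: "EQ-2024-001", category: "health", mag: 6.1 });
roboticsWorker.drain();
```

The point of the queue boundary is that the sentinel never knows how kits get assembled; swapping the robotics handler for a real dispatcher would not touch the sensing side.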
📐 The Logic of Logistics
To ensure the Logistics agent creates efficient simulated plans, we factor in distance $d$, weather risk $\omega$, and urgency $\mu$. The efficiency of a proposed mission $M$ is calculated as:
$$\mathrm{Efficiency}(M) = \frac{\mu}{\sum_i d_i \cdot \omega_i}$$
This allows the Triage agent to autonomously choose the highest-impact path when multiple incidents are observed by the Sentinel.
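The scoring above reduces to a one-line function, and triage is then a max over candidate missions. The sketch below is illustrative only; the mission names, leg structure, and numbers are invented for the example.

```typescript
// Each mission has urgency μ and legs with distance d_i (km) and weather risk ω_i.
interface Mission {
  name: string;
  urgency: number;                                    // μ
  legs: { distance: number; weatherRisk: number }[];  // d_i, ω_i
}

// Efficiency(M) = μ / Σ(d_i · ω_i)
function efficiency(m: Mission): number {
  const cost = m.legs.reduce((sum, leg) => sum + leg.distance * leg.weatherRisk, 0);
  return m.urgency / cost;
}

// Triage: choose the highest-impact mission among observed incidents.
function triage(missions: Mission[]): Mission {
  return missions.reduce((best, m) => (efficiency(m) > efficiency(best) ? m : best));
}

const candidates: Mission[] = [
  { name: "flood-relief", urgency: 8, legs: [{ distance: 120, weatherRisk: 1.5 }] },
  { name: "quake-relief", urgency: 9, legs: [{ distance: 60, weatherRisk: 1.2 }] },
];
const chosen = triage(candidates);
// quake-relief scores 9 / (60 · 1.2) = 0.125, beating flood-relief's 8 / 180 ≈ 0.044.
```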
🚧 Challenges We Faced
- Simulating Autonomy: The hardest part was ensuring the agents could "spawn" missions without a human clicking "Start." We had to build a `Coordinator` that handles event ordering and completion rules so the workforce stays in sync.
- Handling Messy Data: Moving from a seeded demo to live GDACS and USGS feeds meant our agents had to learn to normalize noisy, real-world data into clean "Incident Packets" for the simulation to process.
- The "A2A" Hand-off: Ensuring a seamless hand-off between a "Data Agent" (Intel) and a "Mechanical Agent" (Assembly) required precise tool-calling definitions to prevent the simulation from breaking during complex logistical chains.
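The normalization problem from the challenges above amounts to mapping two differently shaped feeds onto one packet type and dropping records that can't be trusted. The sketch below uses simplified field shapes loosely modeled on the USGS GeoJSON and GDACS feeds; the exact field names and the `IncidentPacket` schema are assumptions for illustration, not the real OMIREACH schema.

```typescript
// One target shape for both feeds (illustrative schema).
interface IncidentPacket { source: string; lat: number; lon: number; severity: number }

// USGS-style record: GeoJSON feature with [lon, lat, depth] coordinates.
function normalizeUsgs(f: {
  geometry: { coordinates: number[] };
  properties: { mag: number | null };
}): IncidentPacket | null {
  if (f.properties.mag == null) return null;  // noisy feed: magnitude can be missing
  const [lon, lat] = f.geometry.coordinates;
  return { source: "usgs", lat, lon, severity: f.properties.mag };
}

// GDACS-style record: string coordinates plus a Green/Orange/Red alert level.
function normalizeGdacs(e: {
  latitude: string; longitude: string; alertlevel: string;
}): IncidentPacket | null {
  const level = ({ Green: 1, Orange: 2, Red: 3 } as Record<string, number>)[e.alertlevel];
  if (level === undefined) return null;       // unknown alert level: skip the record
  return {
    source: "gdacs",
    lat: parseFloat(e.latitude),
    lon: parseFloat(e.longitude),
    severity: level,
  };
}

const packets = [
  normalizeUsgs({ geometry: { coordinates: [142.3, 38.1, 10] }, properties: { mag: 6.4 } }),
  normalizeGdacs({ latitude: "38.1", longitude: "142.3", alertlevel: "Red" }),
].filter((p): p is IncidentPacket => p !== null);
```

Returning `null` for unusable records keeps the messy-data handling at the boundary, so downstream agents only ever see well-formed packets.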
🧠 What We Learned
Building OMIREACH taught us that the future of disaster response lies in Agentic Orchestration. We learned that by defining clear worker boundaries, we can create an AI workforce that is more than the sum of its parts. Even in a simulated environment, the ability of an agent to reason through a "Wicked Problem" and generate a structured, operational outbox for a real-world hand-off is a massive step toward autonomous relief.