Inspiration

Every day in Monterrey — and across Latin America — tons of prepared food expire in malls, restaurants, and food courts while migrant shelters, elderly homes, and orphanages just 2 blocks away struggle to feed their beneficiaries. The coordination is manual, chaotic, and takes 2–4 hours. By then, the food is gone.

We asked: what if an AI agent could close that gap in seconds?

What it does

cosecha_urbana_ai is a 5-step LangGraph agent built on Elasticsearch that fully automates food surplus redistribution:

  1. INGEST — receives a surplus alert via webhook (mall/restaurant reports excess food)
  2. ANALYZE — ES|QL queries calculate an urgency score in real time based on expiry time and food category
  3. MATCH — combines Geo Search (proximity filter) + Groq LLM reasoning to select the optimal recipient (shelter, migrant house, orphanage)
  4. EXECUTE — records the donation in Elasticsearch, deactivates the alert, and sends notifications to donor and recipient via Slack (Kibana Connector)
  5. VALIDATE — verifies match quality, logs the full pipeline result, and updates Kibana analytics
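The flow above can be sketched as plain Python (a framework-free sketch — the real project wires these as LangGraph StateGraph nodes; the node names, state fields, and sample values here are illustrative, not the actual implementation):

```python
def ingest(state):
    # Webhook payload becomes the working state (sample values, not real data).
    state["alert"] = {"food_kg": 15, "category": "prepared", "expires_in_h": 3}
    return state

def analyze(state):
    # Stand-in for the ES|QL urgency query: less time left => higher urgency.
    state["urgency"] = round(1.0 / max(state["alert"]["expires_in_h"], 1), 2)
    return state

def match(state):
    # Stand-in for the geo filter + LLM ranking step.
    state["match"] = {"recipient": "Casa del Migrante San Pedro",
                      "distance_km": 0.93}
    return state

def execute(state):
    # Record donation, deactivate alert, notify via Slack (elided here).
    state["donation_recorded"] = True
    return state

def validate(state):
    state["ok"] = state.get("donation_recorded", False) and not state["errors"]
    return state

def run_pipeline():
    state = {"errors": []}
    for node in (ingest, analyze, match, execute, validate):
        state = node(state)
    return state
```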

Real run result: 15 kg of prepared food from Plaza Fiesta San Agustín matched to Casa del Migrante San Pedro — 0.93 km away, 92% match score — in 11.2 seconds.

How we built it

  • Elasticsearch Cloud (GCP) — 4 indices: donors, recipients, food_alerts, donations_history. Mappings with geo_point, dense_vector (1536 dims for kNN), and full Spanish text analysis
  • ES|QL — real-time urgency scoring queries that run directly on the agent's ANALYZE node without leaving the Elastic stack
  • Geo Search — geo_distance queries filter recipients within a configurable radius (15 km default), sorted by proximity
  • LangGraph (StateGraph) — 5-node directed graph with conditional edges and immutable state between nodes
  • Groq llama-3.3-70b — LLM reasoning for the final recipient selection step, running at low temperature (0.1) for deterministic matching
  • FastAPI — async REST API with Pydantic v2 models and lifespan-managed ES client
  • Kibana — live dashboard tracking kg rescued, donations coordinated, people served, avg distance, and urgency level distribution
  • Kibana Connector — Elastic-native Slack integration for donor/recipient notifications (no external webhook needed)
  • uv — modern Python package management
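The mappings described above might look like this (a sketch — index and field names are assumptions, not the project's actual schema):

```python
# Sketch of the `recipients` index mapping: geo_point for geo_distance
# filtering, dense_vector (1536 dims) reserved for future kNN matching.
recipients_mapping = {
    "mappings": {
        "properties": {
            "name":     {"type": "text", "analyzer": "spanish"},
            "type":     {"type": "keyword"},   # shelter / migrant house / orphanage
            "capacity": {"type": "integer"},
            "location": {"type": "geo_point"},
            "needs_vector": {
                "type": "dense_vector",
                "dims": 1536,
                "index": True,
                "similarity": "cosine",
            },
        }
    }
}

# Created with the official client, e.g.:
# es.indices.create(index="recipients", body=recipients_mapping)
```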

Challenges we ran into

The hardest challenge was designing the LangGraph StateGraph so each node was idempotent and failure-safe — errors are captured in state["errors"] rather than thrown as exceptions, so the graph always reaches VALIDATE even if a step partially fails.
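The error-capture pattern can be sketched as a node decorator (an illustrative sketch, not the project's exact code):

```python
import functools

def failure_safe(node_fn):
    """Capture node exceptions into state['errors'] instead of raising,
    so the graph always progresses to the VALIDATE node."""
    @functools.wraps(node_fn)
    def wrapper(state):
        try:
            return node_fn(state)
        except Exception as exc:
            state.setdefault("errors", []).append(f"{node_fn.__name__}: {exc}")
            return state
    return wrapper

@failure_safe
def execute(state):
    # Simulate a partial failure in the EXECUTE step.
    raise RuntimeError("Slack connector timeout")

state = execute({"errors": []})
```

The graph's conditional edge to VALIDATE then inspects `state["errors"]` instead of relying on exception propagation.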

Getting ES|QL to work inside an async agent node required careful handling of the Elasticsearch async client and query formatting. The real-time urgency scoring query needed to be both fast (<500ms) and expressive enough to factor in food category weights and concurrent alert counts.
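An urgency-scoring query of that shape might look like this (a hedged sketch — index, field names, and category weights are assumptions; the commented call shows how it would run from the async node):

```python
# Sketch of the ANALYZE node's ES|QL query: category weight divided by
# remaining shelf time, highest urgency first.
URGENCY_QUERY = """
FROM food_alerts
| WHERE active == true
| EVAL hours_left = DATE_DIFF("hours", NOW(), expires_at)
| EVAL category_weight = CASE(category == "prepared", 2.0,
                              category == "dairy", 1.5,
                              1.0)
| EVAL urgency = category_weight / GREATEST(hours_left, 1)
| SORT urgency DESC
| LIMIT 10
"""

# Inside the async agent node (elasticsearch-py 8.x async client):
# resp = await es.esql.query(query=URGENCY_QUERY)
# rows = resp["values"]
```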

Configuring the Kibana Connector as the notification channel (instead of a raw Slack webhook) required deep-diving into Kibana's Actions API — but the result is a fully Elastic-native notification layer with zero external dependencies beyond Slack itself.
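Executing a connector goes through Kibana's `POST /api/actions/connector/{id}/_execute` endpoint. A minimal sketch (the URL, connector ID, and message format are assumptions for illustration):

```python
KIBANA_URL = "https://my-deployment.kb.us-central1.gcp.cloud.es.io"  # assumption
CONNECTOR_ID = "slack-donations"                                     # assumption

def slack_notification(donor, recipient, kg, distance_km):
    """Build the request body for the connector _execute endpoint."""
    message = (f"🍽️ {kg} kg rescued: {donor} → {recipient} "
               f"({distance_km} km away)")
    return {"params": {"message": message}}

body = slack_notification("Plaza Fiesta San Agustín",
                          "Casa del Migrante San Pedro", 15, 0.93)

# Sent to Kibana, e.g. with httpx (note the required kbn-xsrf header):
# httpx.post(f"{KIBANA_URL}/api/actions/connector/{CONNECTOR_ID}/_execute",
#            headers={"kbn-xsrf": "true"}, auth=(user, password), json=body)
```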

Accomplishments that we're proud of

  • 11.2 seconds end-to-end: from surplus alert to Slack notification — vs 2–4 hours manually (up to ~1285× faster)
  • 🗺️ 0.93 km average distance between donor and matched recipient in pilot data
  • 📊 Live Kibana dashboard updating in real time as the agent runs: 30→45 kg rescued and 240→360 people served during the demo itself
  • 🤖 5-node LangGraph pipeline with ES|QL + Geo Search + LLM reasoning all working together in a single coherent state machine
  • 🔔 Kibana Connector → Slack notifications: Elastic-native, zero external dependencies

What we learned

ES|QL is remarkably powerful for agent pipelines. Being able to run urgency scoring, priority ranking, and pattern queries directly on Elasticsearch — without pulling data out to process in Python — dramatically reduced agent complexity and latency. The EVAL + STATS combination is particularly useful for computed scores.

We also learned that combining Geo Search with LLM reasoning produces far better matches than either alone. The geo filter ensures logistical viability; the LLM adds semantic judgment about need level, food compatibility, and beneficiary count.
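The geo pre-filter that feeds the LLM can be sketched as a standard `geo_distance` query (field names assumed; the commented call shows where it fits):

```python
# Sketch of the MATCH node's geo pre-filter: only recipients within the
# radius are passed to the LLM, nearest first.
def nearby_recipients_query(lat, lon, radius_km=15):
    point = {"lat": lat, "lon": lon}
    return {
        "query": {"bool": {"filter": {
            "geo_distance": {"distance": f"{radius_km}km", "location": point}
        }}},
        "sort": [{"_geo_distance": {"location": point,
                                    "order": "asc", "unit": "km"}}],
    }

# candidates = await es.search(index="recipients",
#                              body=nearby_recipients_query(25.65, -100.30))
# The LLM then ranks only these logistically viable candidates.
```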

Finally: LangGraph's conditional edges are the right abstraction for multi-step agents with failure modes. Designing the graph first (on paper) before writing node code saved hours of debugging.

What's next for Cosecha Urbana

  • Vector Search (kNN) — embed food descriptions and recipient needs for fully semantic matching (mappings already in place with 1536-dim dense_vector fields)
  • Elastic Agent integration for automated donor onboarding
  • Route optimization — multi-stop donation routing when multiple shelters need the same surplus
  • Monterrey pilot — 5 malls, 8 shelters, targeting 500 kg/week rescued
  • WhatsApp Business API or a native Android/iOS app — notifications delivered directly to coordinators' phones
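Since the 1536-dim dense_vector mappings are already in place, the planned kNN match could be sketched like this (field names and parameters are assumptions; the query vector would come from an embedding model):

```python
# Sketch of the planned semantic match: top-k nearest recipient need
# profiles for an embedded food description.
knn_search = {
    "knn": {
        "field": "needs_vector",
        "query_vector": [0.0] * 1536,  # placeholder embedding
        "k": 5,
        "num_candidates": 50,
    }
}

# resp = await es.search(index="recipients", knn=knn_search["knn"])
```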
