Inspiration
Late markdowns become morning waste. Store teams know it, but they can’t keep up. In 2022, the world wasted about 1.05 billion tonnes of food—roughly one‑fifth of consumer‑available food (UNEP, Food Waste Index 2024). The issue isn’t awareness—it’s timing and trust. Manual markdowns are slow, inconsistent, and hard to audit. AI can help, but AI without traceability is a liability. Retailers need to show why a price changed, what data drove the decision, and who approved it. Food Rescue Map makes every AI pricing decision traceable to a Confluent Cloud Kafka event (topic / partition / offset), so decisions are fast, safe, and provable.
What It Does

Food Rescue Map turns live inventory streams into rescue offers through a fully auditable AI pipeline.
Core Flow
- Inventory events stream through Confluent Cloud Kafka
- AI agents generate safe markdown candidates
- Human‑in‑the‑loop (HIL) approval prevents runaway pricing
- Approved offers appear on the map in real time
Evidence‑First Design
Evidence is not a log. It’s a one‑click operational UI that traces: Input → Reasoning → Guardrails → Approval → Publish
Every decision displays its Kafka coordinates (topic / partition / offset), making the full chain auditable on real streaming data. Managers can filter to show only Kafka‑originated events, proving decisions flow from real inventory—not manual overrides.
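As a minimal sketch of what such a traceable decision record could look like (field and class names here are illustrative assumptions, not the production schema):

```python
from dataclasses import dataclass, asdict

@dataclass
class KafkaOrigin:
    """Coordinates that pin a decision to one immutable Kafka record."""
    topic: str
    partition: int
    offset: int
    timestamp: int  # epoch millis assigned by the broker

@dataclass
class EvidenceRecord:
    """One auditable pricing decision: input -> reasoning -> approval."""
    decision_id: str
    kafka_origin: KafkaOrigin
    reasoning: str
    approved_by: str

record = EvidenceRecord(
    decision_id="dec-001",
    kafka_origin=KafkaOrigin("frm.inventory.items", 2, 1842, 1718000000000),
    reasoning="expiry in 3h; low demand; 30% markdown within guardrail",
    approved_by="store-manager-12",
)

def is_kafka_originated(rec: EvidenceRecord) -> bool:
    """The manager-facing filter reduces to a simple predicate."""
    return rec.kafka_origin is not None and rec.kafka_origin.offset >= 0
```

Because the coordinates are part of the record itself rather than a side log, any downstream consumer can replay the exact source event.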
Intent Tracking (Behavioral KPIs)
Beyond offers, we track engagement signals through the same Kafka infrastructure:
- Offer Views: which items attract attention
- Map Opens: navigation intent (strong purchase signal)
- Dwell Time: engagement depth per offer

These metrics aggregate into per‑store conversion funnels via /api/ops/metrics.
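A per‑store funnel over these signals can be sketched like this (the event payload shape is assumed; only the event type names come from the registered schemas):

```python
from collections import Counter

# Hypothetical event stream as the Stream Worker would read it from Kafka;
# the type names mirror the registered schemas (offers.viewed.v1, ...).
events = [
    {"type": "offers.viewed.v1", "store": "S1", "offer": "o1"},
    {"type": "offers.viewed.v1", "store": "S1", "offer": "o2"},
    {"type": "offers.map.opened.v1", "store": "S1", "offer": "o1"},
]

def store_funnel(events: list[dict], store: str) -> dict:
    """Aggregate a per-store view -> map-open conversion funnel."""
    counts = Counter(e["type"] for e in events if e["store"] == store)
    views = counts["offers.viewed.v1"]
    opens = counts["offers.map.opened.v1"]
    return {
        "views": views,
        "map_opens": opens,
        "conversion": opens / views if views else 0.0,
    }

print(store_funnel(events, "S1"))
# {'views': 2, 'map_opens': 1, 'conversion': 0.5}
```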
Impact KPIs
- Waste avoided (items rescued before expiry)
- Margin uplift (revenue recovered vs. full discard)
- Time‑to‑markdown (hours saved from manual process)
How We Built It
Confluent‑first streaming architecture on Google Cloud.
Architecture Overview

| Component | Technology | Deployment | Purpose |
|---|---|---|---|
| Frontend | React (Vite + Tailwind) + Google Maps | Firebase Hosting | Map UI, Approvals, Evidence, Impact |
| Edge API | FastAPI | Cloud Run | REST endpoints, Kafka publishing, SSE |
| Stream Worker | Python + Kafka Consumer | Cloud Run | Event processing, AI orchestration |
| Messaging | Confluent Cloud Kafka | Managed | Event streaming, Schema Registry |
| State | Firestore | Managed | Projection (Kafka is source of truth) |
| Failure Isolation | DLQ topic | Managed | Dead letter queue for failed events |
| AI | Vertex AI (Gemini 2.5 Flash Lite) | Managed | Multi‑agent reasoning with strict JSON outputs |
The 6‑Agent AI Pipeline
When inventory accumulates (item or time window), the Stream Worker auto‑triggers multi‑agent analysis:
- Demand
- Waste Risk
- Strategy
- Guardrail
- Briefing
- Policy

Each agent outputs structured JSON validated against schemas, keeping downstream processing consistent and safe.
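The validation step can be sketched with the standard library alone (the field contract below is an assumed example for the Guardrail agent; the real contracts live in Schema Registry):

```python
import json

# Hypothetical contract for one agent's output; illustration only.
GUARDRAIL_FIELDS = {"item_id": str, "max_discount_pct": (int, float), "verdict": str}

def validate_agent_output(raw: str, contract: dict) -> dict:
    """Parse an agent's JSON reply and reject anything off-contract,
    so malformed model output never reaches pricing logic."""
    data = json.loads(raw)
    for field, typ in contract.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], typ):
            raise ValueError(f"bad type for {field}")
    return data

ok = validate_agent_output(
    '{"item_id": "bento-42", "max_discount_pct": 30, "verdict": "allow"}',
    GUARDRAIL_FIELDS,
)
```

Rejected payloads would be routed to the DLQ topic rather than silently dropped.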
Intent Tracking Flow
User interactions flow through the same Kafka infrastructure:

User Action → Edge API → Kafka (frm.offers.events) → Stream Worker → Firestore event_log
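The envelope the Edge API publishes could look like the following sketch (field names are assumptions, not the production contract; the topic name is from the flow above):

```python
import json
import time
import uuid

TOPIC = "frm.offers.events"

def make_intent_event(event_type: str, store_id: str, offer_id: str) -> dict:
    """Build an intent-event envelope for publishing to Kafka."""
    return {
        "event_id": str(uuid.uuid4()),
        "type": event_type,               # e.g. offers.map.opened.v1
        "store_id": store_id,
        "offer_id": offer_id,
        "ts": int(time.time() * 1000),    # client-side epoch millis
    }

payload = json.dumps(make_intent_event("offers.map.opened.v1", "S1", "o1"))
# The actual publish would go through a confluent-kafka Producer:
# producer.produce(TOPIC, value=payload)
```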
Schema Registry Integration
Every event type is registered with Confluent Schema Registry (backward‑compatible versioning):
- inventory.upload.v1
- inventory.item.created.v1
- offers.candidate.v1
- offers.approved.v1
- offers.viewed.v1
- offers.map.opened.v1
- offers.view.closed.v1
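Under Confluent's default TopicNameStrategy, the Schema Registry subject is derived from the topic name, while the event version lives in the schema itself. A tiny sketch of that mapping (topic names here are assumed):

```python
def subject_for(topic: str, is_key: bool = False) -> str:
    """Confluent's default TopicNameStrategy: '<topic>-key' / '<topic>-value'."""
    return f"{topic}-{'key' if is_key else 'value'}"

assert subject_for("frm.offers.events") == "frm.offers.events-value"

# With confluent-kafka installed, registration looks roughly like:
# from confluent_kafka.schema_registry import SchemaRegistryClient, Schema
# client = SchemaRegistryClient({"url": SR_URL, "basic.auth.user.info": AUTH})
# client.register_schema(subject_for("frm.offers.events"),
#                        Schema(schema_str, "JSON"))
```

Backward compatibility on each subject is what lets consumers keep reading v1 events after a schema evolves.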
Challenges We Ran Into
1. Safe Automation for Real Pricing
AI‑driven pricing is high‑stakes. A runaway discount could destroy margins.
Solution: Two‑layer control:
- AI Guardrails: hard limits on discount percentages
- Human‑in‑the‑Loop: every recommendation requires approval
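The two layers compose as in this sketch (the 50% ceiling and the function names are assumptions for illustration):

```python
MAX_DISCOUNT_PCT = 50  # hard ceiling; assumed value, would be set per store policy

def apply_guardrail(proposed_pct: float) -> float:
    """Layer 1: clamp any AI-proposed markdown to the hard limit."""
    return max(0.0, min(proposed_pct, MAX_DISCOUNT_PCT))

def publish_offer(proposed_pct: float, approved: bool) -> dict:
    """Layer 2: nothing ships without explicit human approval."""
    if not approved:
        return {"status": "pending_approval"}
    return {"status": "published", "discount_pct": apply_guardrail(proposed_pct)}

# A runaway 80% proposal is clamped, and only after a human approves:
assert publish_offer(80, approved=True) == {"status": "published", "discount_pct": 50}
assert publish_offer(80, approved=False) == {"status": "pending_approval"}
```

Ordering matters: the clamp runs even on approved offers, so a mistaken approval still cannot exceed the hard limit.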
2. Guaranteed Traceability
We needed every AI decision traceable to source data.
Solution: Kafka offset metadata travels through the pipeline. Evidence displays kafkaOrigin: { topic, partition, offset, timestamp }.
3. Real‑time Updates Without Polling
Users expect instant updates when offers are approved.
Solution: Server‑Sent Events (SSE) from Edge API. Frontend subscribes to /events/stream.
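The wire format SSE uses is plain text frames; a sketch of the formatter such an endpoint would yield from (event name and fields are illustrative):

```python
import json

def sse_frame(event: str, data: dict) -> str:
    """Format one Server-Sent Events frame as /events/stream would emit it.
    Each frame is 'event:' + 'data:' lines, terminated by a blank line."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

frame = sse_frame("offer.approved", {"offer_id": "o1", "discount_pct": 30})
# In FastAPI, frames like this would be yielded from a generator wrapped
# in a StreamingResponse with media_type="text/event-stream".
```

On the frontend, the browser's built-in EventSource API handles reconnection, so no polling loop is needed.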
4. Repeatable Demo Scenarios
Real‑time systems are unpredictable during demos.
Solution: Demo replay endpoints that replay scenarios through Kafka for reliable judging.
Accomplishments We’re Proud Of
- Auto‑triggered by inventory events, with manual override for demos/ops
- Full audit trail: every decision traceable to Kafka coordinates
- Real‑time updates via SSE
- Schema‑enforced contracts via Schema Registry
- DLQ isolation for resilience
What We Learned
- Event‑driven AI removes manual bottlenecks
- Schema Registry + Kafka offsets create auditable AI
- Responsible pricing needs guardrails + HIL
- Single‑store value enables low‑risk pilots
What’s Next for Food Rescue Map
- Pilot with 2–3 stores and A/B test AI recommendations vs baseline
- Add store‑specific demand signals (seasonality, weekday, weather)
- Expand real‑time intent signals (in‑app actions, map opens, dwell time)
- Scale to multi‑store networks
Potential Impact (Hypothesis)
Target Customer: Urban bento/deli chains (30–50 items discarded per day per store)

Hypothesis: AI + real‑time inventory monitoring can reduce waste by 20–30%

Conservative Estimate:
- 40 items/day average discard
- 25% reduction with AI recommendations
- 30 days/month
- = 300 items saved per month per store
| Metric | Single Store | 10‑Store Chain | 100‑Store Network |
|---|---|---|---|
| Items Rescued/Month | 300 | 3,000 | 30,000 |
| Margin Recovery | +¥45,000 | +¥450,000 | +¥4,500,000 |
| CO2 Equivalent Saved | 150 kg | 1,500 kg | 15,000 kg |
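The table reduces to simple arithmetic; the per‑item margin and CO2 factors below are back‑derived from the table, not measured values:

```python
items_per_day_discarded = 40   # conservative average discard
reduction_rate = 0.25          # hypothesized 25% reduction
days_per_month = 30

items_saved = int(items_per_day_discarded * reduction_rate * days_per_month)

# Implied per-item factors: ¥150 margin and 0.5 kg CO2e per rescued item.
margin_yen = items_saved * 150
co2_kg = items_saved * 0.5

# Chain-level figures scale linearly with store count:
for stores in (1, 10, 100):
    print(stores, items_saved * stores, margin_yen * stores, co2_kg * stores)
```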
Note: Numbers are hypotheses for validation. Actual impact requires pilot deployment with A/B testing.
Built With
- cloudrun
- confluentcloud
- fastapi
- firebaseauthentication
- firebasehosting
- firestore
- gemini2.5flashlite
- google-maps
- googlecloudsecretmanager
- kafka
- react
- schemaregistry
- vertexai
- vite