wdym86 is an AI-powered restaurant ops platform that forecasts demand with uncertainty, turns forecasts into actionable decisions, and uses Gemini to explain the "why" in plain language. It is backed by real workflows (POS/BOH, tax, payments) and a tenant-safe, RAG-ready database foundation (with optional pgvector embeddings) to support retrieval from SOPs/policies, menus/recipes, vendors, and other ops knowledge.
Inspiration (why we built this)
Restaurants operate like real-time logistics systems: supplier lead times, perishable inventory, demand spikes, and service pressure. In that environment, the “right answer” is rarely a single number—it’s a decision you can justify fast.
We built wdym86 to solve three recurring problems:
- Variance: demand swings cause stockouts or waste.
- Decision fatigue: managers make reorder/supplier decisions under time pressure.
- Trust: recommendations that can’t be explained don’t get used.
NCR GenAI track
- Gemini is used for business-specific explanations + chat, grounded in restaurant database context (inventory, suppliers, dishes, recent orders).
- The GenAI layer is used where it adds value: communication and reasoning summaries—not replacing deterministic decision logic.
[MLH] Best Use of Gemini API
- Understand language / personalized advice: Gemini 2.5 Flash powers the AI advisor chat; all responses are grounded in your restaurant data (name, cuisine, ingredients, orders)—never generic. Function calling (6 tools: check_inventory, search_menu, get_supplier_info, get_daily_stats, get_low_stock_alerts, create_reorder_suggestion), vision (food photos, invoices), code execution (Python in chat), Google Search grounding (market prices, citations).
- Analyze like a supercomputer: Explains agent decisions, what-if scenarios, daily summaries; AI Insight Cards on Dashboard, Dishes, and Suppliers.
- Generate creative content: Code execution in chat (charts, calculations); structured output for insight cards. Makes friends say WHOA.
Best Overall Hack
- Full-stack restaurant platform: check-first POS (7 payment methods), BOHPOS, NCR Voyix BSP integration, ground-up TCN forecasting, 3 autonomous agents, Gemini 2.5 (function calling, vision, code exec, search), daily projections, timeline analytics, floor plans, delivery, payroll, staff/roles, Stripe, Solana Pay, TaxJar, 25 pages, 130+ APIs, 132 tests. One dominant system across POS, inventory, and admin.
Ground-Up Model track
- Forecasting is a pure NumPy implementation (no PyTorch/TensorFlow), including:
  - TCN architecture
  - Negative Binomial output ( \mu, k )
  - Negative Binomial negative log-likelihood loss with manual gradients
[MLH] Best Use of Solana
- Fits a consumer/restaurant product that can handle real-world payment volume alongside Stripe/card; one less bottleneck at the register.
[MLH] Best .tech domain name
- What do you mean we don't have the best domain name???
What we built (deliverables)
Core pipeline
- Forecast: probabilistic ingredient demand (mean + uncertainty)
- Decide: 3-agent pipeline (risk → reorder → supplier strategy)
- Explain: Gemini explanation for decisions + manager Q&A
Operational surfaces
- POS and BOHPOS (kitchen display)
- Payments (Stripe) and Tax (TaxJar + fallback)
How it works (implementation overview with repo pointers)
1) Forecasting: NumPy TCN → Negative Binomial parameters
- TCN: backend/app/ml/tcn.py
- Full model wrapper: backend/app/ml/model.py
- NB NLL + manual gradients: backend/app/ml/losses.py
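The core operation a TCN stacks is a dilated causal convolution. A minimal pure-NumPy sketch of that single op (the function name and shapes here are illustrative, not the repo's actual API):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal 1-D convolution: y[t] depends only on x[t], x[t-d], x[t-2d], ...

    x: (T,) input series, w: (K,) kernel, dilation d >= 1.
    Left-pads with zeros so the output has the same length as the input.
    """
    T, K = len(x), len(w)
    pad = (K - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    y = np.zeros(T)
    for t in range(T):
        # taps look strictly backwards in time: x[t], x[t-d], x[t-2d], ...
        taps = xp[t + pad - np.arange(K) * dilation]
        y[t] = taps @ w
    return y
```

Stacking these layers with exponentially growing dilations (1, 2, 4, ...) gives a large receptive field over demand history while never leaking future information into a forecast.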
We model restaurant demand as overdispersed count data (variance exceeding the mean). The Negative Binomial parameterization:
- ( \mu ): expected demand
- ( k ): dispersion (controls variance)
[ \mathrm{Var}(Y) = \mu + \frac{\mu^2}{k} ]
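The variance formula can be checked empirically with NumPy's sampler, which uses an (n, p) parameterization; our ( \mu, k ) maps to n = k, p = k / (k + \mu). A quick sanity check (values illustrative):

```python
import numpy as np

mu, k = 10.0, 5.0                                  # expected demand, dispersion
rng = np.random.default_rng(0)
# NumPy parameterizes NB by (n, p); (mu, k) maps to n = k, p = k / (k + mu)
samples = rng.negative_binomial(n=k, p=k / (k + mu), size=200_000)
print(samples.mean())                              # ≈ mu = 10
print(samples.var())                               # ≈ mu + mu**2 / k = 30
```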
We train by minimizing NB negative log-likelihood:
[ \mathrm{NLL}(y; \mu, k) = -\Big( \log\Gamma(y+k) - \log\Gamma(k) - \log\Gamma(y+1) + k\log\frac{k}{k+\mu} + y\log\frac{\mu}{k+\mu} \Big) ]
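A minimal scalar sketch of this loss (the repo's vectorized version lives in backend/app/ml/losses.py; this standalone form is just for illustration):

```python
import math

def nb_nll(y, mu, k):
    """Negative log-likelihood of a single count y under NB(mu, k)."""
    return -(
        math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
        + k * math.log(k / (k + mu))
        + y * math.log(mu / (k + mu))
    )
```

At y = 0 this reduces to the closed form k·log(1 + \mu/k), which makes a handy unit test for the implementation.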
2) Decisions: 3 agents orchestrated into one auditable pipeline
- Orchestrator: backend/app/agents/orchestrator.py
- Agents: backend/app/agents/*
Pipeline stages:
- Inventory Risk Agent: stockout probability + risk level
- Reorder Optimization Agent: what/when to order (with constraints)
- Supplier Strategy Agent: mitigation plan under disruptions
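The risk → reorder handoff can be sketched directly from the NB forecast (function names and thresholds here are illustrative; the real agents live in backend/app/agents/*):

```python
import math

def nb_pmf(y, mu, k):
    """NB(mu, k) probability mass at count y."""
    log_p = (math.lgamma(y + k) - math.lgamma(k) - math.lgamma(y + 1)
             + k * math.log(k / (k + mu)) + y * math.log(mu / (k + mu)))
    return math.exp(log_p)

def stockout_probability(on_hand, mu, k):
    """P(demand > on_hand) under the NB forecast."""
    return 1.0 - sum(nb_pmf(y, mu, k) for y in range(int(on_hand) + 1))

def risk_level(p):
    # illustrative thresholds, not the repo's calibrated ones
    return "high" if p > 0.5 else "medium" if p > 0.2 else "low"

def reorder_qty(on_hand, mu, k, service_level=0.95):
    """Smallest top-up that pushes stockout risk below 1 - service_level."""
    target = on_hand
    while stockout_probability(target, mu, k) > 1 - service_level:
        target += 1
    return target - on_hand
```

Because each stage emits a structured artifact (probability, level, quantity), the downstream Gemini explanation can cite the actual numbers rather than paraphrase them.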
Key engineering choice: decisions are structured and persisted, so downstream explanations reflect actual computed artifacts, not vibes.
3) GenAI: Gemini explanations grounded in restaurant context
- Gemini router: backend/app/routers/gemini.py
- Prompts/client: backend/app/gemini/*
Gemini receives a constructed restaurant context (inventory, suppliers, dishes, recent orders, and agent decision context) and produces:
- short explanations (manager-readable)
- chat responses grounded in real data
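The grounding step can be sketched as a pure prompt-builder (the data shape and function name are hypothetical; the real context construction lives in backend/app/gemini/*):

```python
def build_context_prompt(restaurant: dict, question: str) -> str:
    """Assemble a grounded prompt from live restaurant records.

    Illustrative only: shows the idea that every answer is conditioned on
    actual data (inventory, orders), never on generic knowledge alone.
    """
    low_stock = [item["name"] for item in restaurant["inventory"]
                 if item["qty"] <= item["reorder_point"]]
    lines = [
        f"Restaurant: {restaurant['name']} ({restaurant['cuisine']})",
        f"Low-stock items: {', '.join(low_stock) or 'none'}",
        f"Recent orders: {len(restaurant['orders'])}",
        "Answer using only the data above.",
        f"Question: {question}",
    ]
    return "\n".join(lines)
```

The backend client then sends the assembled string to Gemini 2.5 Flash; keeping the builder pure makes the grounding step trivially testable without network calls.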
4) Real workflows: POS/BOH + tax + payments
- POS UI: frontend/src/pages/POS.tsx
- BOHPOS UI: frontend/src/pages/BOHPOS.tsx
- Tax router: backend/app/routers/tax.py
- Stripe service: backend/app/services/stripe_service.py
These workflows are included because “intelligence” only matters if it fits into service-time operations.
What to try in the demo (fast checklist)
Forecast → Agents → Explain
- Run a forecast for an ingredient
- Run the agent pipeline (risk/reorder/strategy)
- Click Explain to get a Gemini justification
Operational proof
- Create a check in POS
- Send to BOHPOS
- (Optional) show tax calculation and payment integration surfaces
Challenges we faced (and how we handled them)
- Demo reliability: backend-down scenarios can cause infinite loading in typical demos.
  - We added timeouts / fast-fail patterns and "demo-mode" fallbacks to keep an end-to-end path runnable.
- Ground-up ML engineering: without autograd, correctness and stability matter.
  - We clamped parameters for stability and implemented analytical gradients for the NB NLL.
- Scope control: the platform has many integrations; the hard part was a coherent end-to-end story.
  - We anchored the narrative on: variance → decisions → explanations → workflow.
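The stability fix can be sketched concretely. Differentiating the NB NLL with respect to \mu gives (y+k)/(k+\mu) - y/\mu, and clamping keeps logs and divisions finite; the constants and function name below are illustrative, not the repo's exact values:

```python
import numpy as np

MU_MIN, K_MIN, K_MAX = 1e-6, 1e-3, 1e4   # clamp ranges (illustrative)

def nb_nll_grad_mu(y, mu, k):
    """Analytic d(NLL)/d(mu) for the NB negative log-likelihood.

    Derived by hand; no autograd framework involved.
    """
    mu = np.clip(mu, MU_MIN, None)       # keep log(mu) and divisions finite
    k = np.clip(k, K_MIN, K_MAX)         # bound dispersion for stability
    return (y + k) / (k + mu) - y / mu
```

A finite-difference check against the NLL itself is the cheapest regression test for hand-written gradients like this.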
What we learned
- Uncertainty beats overconfident point forecasts in restaurant operations.
- Explanations are not optional—they are the adoption mechanism.
- Full-stack demos win: showing the workflow reduces judge skepticism.
Next steps
- RAG (document retrieval) end-to-end: ingestion → embeddings → retrieval → citations (we laid a RAG-ready DB foundation, but retrieval isn’t yet the primary demo path).
- Stronger offline evaluation: calibration metrics, agent outcome metrics, regression tests for explanations.
- Make NCR integrations explicit in the demo path (menu/catalog sync, transaction logs).
Built With
- css
- docker
- javascript
- mako
- numpy
- python
- typescript
- vision