Inspiration
We asked a dangerous question:
What if competitive eating were treated like a high-frequency trading system?
Watching legends like Joey Chestnut dominate contests, we realized the gap wasn’t just stomach size—it was optimization. Most competitors rely on instinct. We built a system that replaces instinct with real-time decision intelligence.
We were also inspired by control systems in aerospace and finance—why shouldn’t eating panchos have feedback loops, predictive modeling, and adaptive strategies?
What it does
“Moneyball for hot dogs”
We built PanchoMoneyball, a live AI-powered dashboard that turns competitive eating into a data-driven sport.
In real time, competitors see:
- Panchos per minute (PPM)
- Projected final total
- Fatigue-adjusted pace
- AI coaching prompts
At the core, we model performance as a live-updating projection:
[ \hat{P}(T) = \int_0^t r(\tau)\,d\tau + (T - t)\cdot \hat{r}(t) ]
Where:
- ( \hat{P}(T) ) = predicted final panchos
- ( r(t) ) = current eating rate
- ( \hat{r}(t) ) = fatigue-adjusted future rate
We tell you not just how you're doing—but how you're going to finish.
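As a minimal sketch (the helper name and the mid-contest numbers are our own illustration, not real contest data), the projection reduces to panchos already eaten plus the remaining time at the fatigue-adjusted rate:

```python
def projected_final(eaten_so_far: float, t: float, T: float, r_hat: float) -> float:
    """P_hat(T): panchos already eaten (the integral of r up to t)
    plus the remaining time extrapolated at the fatigue-adjusted rate."""
    return eaten_so_far + (T - t) * r_hat

# Hypothetical snapshot: 24 panchos in the first 5 of 10 minutes,
# fatigue-adjusted rate estimated at 3.5 panchos/min.
print(projected_final(24, t=5, T=10, r_hat=3.5))  # 24 + 5 * 3.5 = 41.5
```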
Live Dashboard
The interface is designed like a trading terminal for food:
Main metric: [ \text{PPM} = \frac{\text{panchos eaten}}{\text{minutes}} ]
Momentum indicator: [ M(t) = \frac{d}{dt} r(t) ]
Endgame trigger: when [ T - t < \delta ], the system switches to MAX OUTPUT MODE.
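The three dashboard signals can be sketched in a few lines (the function names and the default δ of one minute are our own illustration):

```python
def ppm(panchos_eaten: float, minutes: float) -> float:
    """Main metric: panchos per minute."""
    return panchos_eaten / minutes

def momentum(rate_now: float, rate_prev: float, dt: float) -> float:
    """M(t): finite-difference approximation of dr/dt."""
    return (rate_now - rate_prev) / dt

def output_mode(t: float, T: float, delta: float = 1.0) -> str:
    """Endgame trigger: inside the final delta minutes, go all-out."""
    return "MAX OUTPUT" if T - t < delta else "NORMAL"
```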
AI Coach (in your pocket)
A lightweight mobile AI acts as a real-time performance coach, giving short, high-impact commands:
- “Hydrate. Maintain rhythm.”
- “You’re 2 behind projection—reduce chew time.”
- “Fatigue spike detected. Stabilize pace.”
- “Final minute: override limits.”
The logic is based on:
[ r_{optimal}(t) = r(t) + k \cdot (P_{target} - \hat{P}(t)) ]
- If you’re behind, it pushes you harder.
- If you’re burning out, it stabilizes you.
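The coaching law is a plain proportional controller. A sketch, with an illustrative gain `k` rather than a tuned value:

```python
def optimal_rate(r_now: float, k: float, p_target: float, p_projected: float) -> float:
    """r_optimal(t) = r(t) + k * (P_target - P_hat(t)).
    Behind projection -> positive correction (push harder);
    ahead of projection -> negative correction (stabilize)."""
    return r_now + k * (p_target - p_projected)

print(optimal_rate(4.0, k=0.1, p_target=50, p_projected=45))  # pushes: 4.5
```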
The Moneyball Insight
Inspired by Moneyball, we asked:
What if winning isn’t about eating more… but optimizing better?
Traditional competitors rely on instinct. We quantify everything:
- Pace inefficiency
- Over-chewing penalties
- Hydration timing impact
- Micro slowdowns that cost wins
We discovered:
[ \text{Winning} \neq \text{Max Speed} ]
[ \text{Winning} = \max \int_0^T r(t)\,dt \quad \text{under constraints} ]
Everyone else is just eating.
We are:
- Measuring
- Predicting
- Optimizing
- Adapting
In real time.
Walkthrough: what PanchoMoneyball does
A competitor starts strong → dashboard shows high PPM.
Midway → fatigue detected → AI says: “Slow chew, hydrate.”
Final minute → screen flashes: “+3 TO WIN.”
They push.
They win.
PanchoMoneyball is a real-time competitive eating optimizer that maximizes panchos consumed per unit time.
At its core, we model performance as:
[ P(T) = \int_0^T r(t)\,dt ]
Where:
- ( P(T) ) = total panchos consumed
- ( r(t) ) = eating rate (panchos/min)
- ( T ) = competition duration
The AI continuously adjusts ( r(t) ) based on fatigue, fullness, and time remaining.
We also model stomach capacity dynamics:
[ C(t) = C_0 + \alpha \cdot \log(1 + t) - \beta \cdot F(t) ]
Where:
- ( C(t) ) = available capacity
- ( F(t) ) = fatigue function
- ( \alpha, \beta ) = adaptation coefficients
We predict when you’ll slow down before it happens—and correct it in real time.
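A sketch of the capacity model; `c0`, `alpha`, and `beta` are placeholder coefficients for illustration, not fitted values:

```python
import math

def capacity(t: float, fatigue: float,
             c0: float = 20.0, alpha: float = 2.0, beta: float = 1.5) -> float:
    """C(t) = C0 + alpha * log(1 + t) - beta * F(t):
    baseline capacity, slow logarithmic adaptation, fatigue penalty."""
    return c0 + alpha * math.log1p(t) - beta * fatigue
```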
Features:
- Live pace tracking (panchos/minute)
- AI voice coach: “Hydrate. Reduce chew cycles. Maintain rhythm.”
- Predictive finish score
- Technique switching (dunk vs dry, split vs whole)
How we built it
We combined:
- A lightweight mobile interface for real-time input
- A backend ML model trained on simulated eating curves
- Rule-based overrides for safety and edge cases
Our system behaves like a feedback controller:
[ r_{optimal}(t) = r_{base} \cdot e^{-\lambda F(t)} + \gamma \cdot (T - t) ]
Where:
- Early phase → controlled aggression
- Mid phase → efficiency stabilization
- Final phase → all-out burst
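The controller itself is a one-liner. A sketch, with illustrative gains `lam` and `gamma` (not tuned on real data):

```python
import math

def controller_rate(r_base: float, fatigue: float, t: float, T: float,
                    lam: float = 0.3, gamma: float = 0.2) -> float:
    """r_optimal(t) = r_base * exp(-lambda * F(t)) + gamma * (T - t).
    Fatigue damps the base rate exponentially; the second term adds
    aggression proportional to time remaining."""
    return r_base * math.exp(-lam * fatigue) + gamma * (T - t)
```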
We also implemented a greedy optimization layer:
[ \max \sum_{i=1}^{n} p_i \quad \text{subject to} \quad C(t) \leq C_{max} ]
Meaning: Eat as many panchos as possible without triggering shutdown.
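The greedy layer can be sketched directly from the constraint (per-pancho "sizes" and the helper name are our own illustration):

```python
def greedy_eat(pancho_sizes: list[float], c_max: float) -> int:
    """Greedy layer: take panchos in order while total gastric load
    stays within C_max (the 'shutdown' constraint)."""
    load, count = 0.0, 0
    for size in pancho_sizes:
        if load + size > c_max:
            break  # the next pancho would trigger shutdown
        load += size
        count += 1
    return count

print(greedy_eat([1.0, 1.0, 1.0, 2.0], c_max=3.0))  # 3
```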
Challenges we ran into
Human body ≠ deterministic system. Our model assumed smooth decay; reality gave us chaos.
Latency vs. reaction time. By the time the AI says “slow down,” you’re already suffering.
Data scarcity. Shockingly, there is no public dataset of “panchos eaten over time.”
The fullness wall. At some point:
[ r(t) \rightarrow 0 \quad \text{as} \quad t \rightarrow T ]
And no model can save you.
Accomplishments that we're proud of
- Turned eating into a quantifiable system
- Built a working real-time optimizer within a single hackathon
- Created a model that adapts mid-competition
- Achieved a theoretical improvement:
[ \Delta P \approx +18\% \text{ over naive pacing} ]
- Most importantly: We made judges say, “Wait… this is actually genius.”
What we learned
- Optimization beats brute force
- Pacing is everything:
[ r(t) > r_{max} \Rightarrow \text{early burnout} ]
- The real constraint isn’t speed—it’s capacity management
- Humans are noisy systems, not clean equations
- And most importantly: The difference between winning and losing is often strategy, not effort
What's next for hot dog eating boosted with AI
We’re just getting started.
Next iterations include:
Reinforcement Learning Eaters
Train agents to discover optimal strategies:
[ \pi^*(s) = \arg\max_{a} \, \mathbb{E}[P(T) \mid s, a] ]
Biofeedback Integration
Internal state awareness beyond visible performance.
A Better Model
The high-value biometric layer (realistic + defensible)
1. Cardiovascular strain → fatigue predictor
- Heart rate (HR)
- Heart rate variability (HRV)
We extend the fatigue function:
[ F(t) = w_1 \cdot \text{HR}(t) + w_2 \cdot (1 - \text{HRV}(t)) + w_3 \cdot t ]
This lets us detect impending collapse before the pace drops.
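A sketch of the extended fatigue function. We assume HR and HRV are pre-scaled to [0, 1]; the weights are placeholders, not calibrated on real biometric data:

```python
def fatigue(hr_norm: float, hrv_norm: float, t: float,
            w1: float = 0.5, w2: float = 0.3, w3: float = 0.02) -> float:
    """F(t) = w1 * HR(t) + w2 * (1 - HRV(t)) + w3 * t.
    High heart rate and low heart rate variability both raise fatigue;
    the w3 term adds a slow drift with elapsed time."""
    return w1 * hr_norm + w2 * (1.0 - hrv_norm) + w3 * t
```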
2. Breathing + vagal stress → choke risk
- Respiratory rate
- Breath irregularity
New constraint layer:
[ r(t) \leq r_{\text{safe}}(t) = f(\text{respiration}, \text{HR}) ]
We are not just optimizing speed; we are avoiding catastrophic failure events.
3. Gastric load estimation (our “leptin substitute”)
We approximate fullness dynamically:
[ G(t) = \int_0^t v(\tau)\,d\tau - k \cdot \text{emptying}(t) ]
Where:
- ( v(t) ) = ingestion rate
- emptying ≈ slow, but non-zero
This becomes your real constraint:
[ r(t) \rightarrow 0 \quad \text{as} \quad G(t) \rightarrow G_{max} ]
This is the true wall predictor.
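A discrete sketch of the gastric constraint; the emptying rate and the linear taper are our own simplifying assumptions:

```python
def gastric_load(intake_rates: list[float], dt: float,
                 emptying_rate: float = 0.05) -> float:
    """Discrete G(t): accumulate ingested volume per timestep minus a
    slow, constant emptying term (clamped at zero)."""
    load = 0.0
    for v in intake_rates:
        load = max(0.0, load + v * dt - emptying_rate * dt)
    return load

def safe_rate(r: float, load: float, g_max: float) -> float:
    """Taper the allowed rate linearly to zero as G(t) -> G_max."""
    return r * max(0.0, 1.0 - load / g_max)
```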
4. Micro-movement / chew detection
Using an accelerometer or jaw sensors, we track:
- Chew cycles per bite
- Swallow latency
We quantify something no competitor tracks:
[ \text{Efficiency} = \frac{\text{Panchos}}{\text{Chew Cycles}} ]
Today: “We optimize what you do.”
Next: “We will optimize what your body can sustain in real time.”
Feature
A “Wall Prediction Index”
[ W(t) = P(\text{rate collapse in the next } 60\,\text{s}) ]
Driven by:
- HR spike
- HRV drop
- Gastric load estimate
- Momentum decay
AI coach:
“Wall in 45 seconds if pace continues. Reduce intake velocity by 12%.”
That is the competitive advantage.
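One way to combine the four risk signals into a probability-like score is a logistic model. This is a sketch only: the weights, bias, and [0, 1] scaling of the inputs are invented for illustration, not learned from data:

```python
import math

def wall_index(hr_spike: float, hrv_drop: float,
               gastric_frac: float, momentum_decay: float) -> float:
    """W(t): logistic score for rate collapse in the next minute,
    over the four risk signals (each assumed scaled to [0, 1]).
    Weights and bias are placeholders, not fitted values."""
    weights = (2.0, 1.5, 3.0, 1.0)
    signals = (hr_spike, hrv_drop, gastric_frac, momentum_decay)
    z = -4.0 + sum(w * x for w, x in zip(weights, signals))
    return 1.0 / (1.0 + math.exp(-z))
```

With all signals calm the score stays near zero; as they saturate it approaches one, which is when the coach would issue the warning.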
Pushing the frontier
- Pre-competition optimization (nutrition timing, stomach training)
- Personalized pacing curves learned over time
- Digital twin of the eater:
[ \text{Human}_{model} \approx f(\text{history}, \text{biometrics}, \text{strategy}) ]
The first physiological control system for human performance in extreme consumption
Vision
- Model internal state, not just visible performance
A legitimate high-performance system… just applied to a very unusual sport.
🤖 Autonomous Eating Assist Systems
Yes, we’re thinking robotic augmentation (ethically questionable, competitively dominant).
Global Pancho Leaderboard
Standardized metrics:
[ \text{PPM} = \frac{\text{Panchos}}{\text{Minutes}} ]
Final Vision
Turn competitive eating into a data-driven sport where:
- Every bite is optimized
- Every second is modeled
- Every competitor becomes a system
In the end, this isn’t about eating.
It’s about answering one question:
[ \max_{\text{human}} \; \text{Panchos} ]
And we’re getting closer to the limit.
Built With
- hot-meal
- jaw-va
- pie-thin