Inspiration

We asked a dangerous question:

What if competitive eating were treated like a high-frequency trading system?

Watching legends like Joey Chestnut dominate contests, we realized the gap wasn’t just stomach size—it was optimization. Most competitors rely on instinct. We built a system that replaces instinct with real-time decision intelligence.

We were also inspired by control systems in aerospace and finance—why shouldn’t eating panchos have feedback loops, predictive modeling, and adaptive strategies?


What it does

“Moneyball for hot dogs”

We built PanchoMoneyball, a live AI-powered dashboard that turns competitive eating into a data-driven sport.

In real time, competitors see:

  • Panchos per minute (PPM)
  • Projected final total
  • Fatigue-adjusted pace
  • AI coaching prompts

At the core, we model performance as a live-updating projection:

[ \hat{P}(T) = \int_0^t r(\tau)\, d\tau + (T - t)\cdot \hat{r}(t) ]

Where:

  • ( \hat{P}(T) ) = predicted final panchos
  • ( r(t) ) = current eating rate
  • ( \hat{r}(t) ) = fatigue-adjusted future rate

We tell you not just how you're doing—but how you're going to finish.
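As a minimal sketch, the projection above can be computed like this (the exponential fatigue decay and its constant are illustrative assumptions, not fitted values):

```python
import math

def project_final_total(eaten_so_far, current_rate, t, T, fatigue_decay=0.05):
    """Project the final pancho count: panchos already eaten, plus the
    remaining time at a fatigue-adjusted future rate r_hat(t).

    fatigue_decay (1/min) is an assumed exponential decay constant."""
    remaining = T - t
    if remaining <= 0:
        return eaten_so_far
    if fatigue_decay > 0:
        # Average of r(t) * exp(-k*s) over the remaining window, in closed form
        avg_future_rate = (current_rate
                           * (1 - math.exp(-fatigue_decay * remaining))
                           / (fatigue_decay * remaining))
    else:
        avg_future_rate = current_rate
    return eaten_so_far + remaining * avg_future_rate
```

With no fatigue decay this reduces to straight-line pacing; with decay, the projection shaves off the panchos you were never going to finish.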


Live Dashboard

The interface is designed like a trading terminal for food:

  • Main metric: [ \text{PPM} = \frac{\text{panchos eaten}}{\text{minutes}} ]

  • Momentum indicator: [ M(t) = \frac{d}{dt} r(t) ]

  • Endgame trigger: when [ T - t < \delta ], the system switches to MAX OUTPUT MODE
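A rough sketch of how those readouts could be derived from timestamped counts (the finite-difference momentum and the one-minute endgame window are simplifying assumptions):

```python
class PanchoDashboard:
    """Trading-terminal-style readouts: PPM, momentum M(t) ~ dr/dt,
    and the MAX OUTPUT MODE trigger when T - t < delta."""

    def __init__(self, contest_minutes, endgame_delta=1.0):
        self.T = contest_minutes
        self.delta = endgame_delta   # endgame window in minutes (assumed)
        self.prev = None             # last (t, eaten, rate) sample

    def update(self, t, eaten):
        ppm = eaten / t if t > 0 else 0.0
        momentum = 0.0
        if self.prev is not None:
            t0, n0, r0 = self.prev
            rate = (eaten - n0) / (t - t0)      # instantaneous rate r(t)
            momentum = (rate - r0) / (t - t0)   # finite-difference M(t)
        else:
            rate = ppm
        self.prev = (t, eaten, rate)
        return {"ppm": ppm, "momentum": momentum,
                "max_output_mode": (self.T - t) < self.delta}
```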


AI Coach (in your pocket)

A lightweight mobile AI acts as a real-time performance coach, giving short, high-impact commands:

  • “Hydrate. Maintain rhythm.”
  • “You’re 2 behind projection—reduce chew time.”
  • “Fatigue spike detected. Stabilize pace.”
  • “Final minute: override limits.”

The logic is based on:

[ r_{optimal}(t) = r(t) + k \cdot (P_{target} - \hat{P}(t)) ]

  • If you’re behind, it pushes you harder.
  • If you’re burning out, it stabilizes you.
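A minimal sketch of that control law as a coaching function (the gain k and the deadband are illustrative assumptions):

```python
def coach_rate(current_rate, target_total, projected_total, k=0.2):
    """Proportional control law: r_optimal = r + k * (P_target - P_hat).
    The gain k is an illustrative assumption."""
    return current_rate + k * (target_total - projected_total)

def coach_prompt(current_rate, target_total, projected_total, k=0.2, deadband=0.1):
    """Turn the control law into a short command; the deadband avoids nagging
    when the correction is negligible."""
    r_opt = coach_rate(current_rate, target_total, projected_total, k)
    if r_opt > current_rate + deadband:
        return "Behind projection - reduce chew time."
    if r_opt < current_rate - deadband:
        return "Fatigue risk - stabilize pace."
    return "Maintain rhythm."
```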

The Moneyball Insight

Inspired by Moneyball, we asked:

What if winning isn’t about eating more… but optimizing better?

Traditional competitors rely on instinct. We quantify everything:

  • Pace inefficiency
  • Over-chewing penalties
  • Hydration timing impact
  • Micro slowdowns that cost wins

We discovered:

[ \text{Winning} \neq \text{Max Speed} ]

[ \text{Winning} = \max \int_0^T r(t), dt \quad \text{under constraints} ]


Everyone else is just eating.

We are:

  • Measuring
  • Predicting
  • Optimizing
  • Adapting

In real time.


Walkthrough of what PanchoMoneyball does

A competitor starts strong → dashboard shows high PPM.

Midway → fatigue detected → AI says: “Slow chew, hydrate.”

Final minute → screen flashes: “+3 TO WIN.”

They push.

They win.


Pancho.ai is a real-time competitive eating optimizer that maximizes panchos consumed per unit time.

At its core, we model performance as:

[ P(T) = \int_0^T r(t)\,dt ]

Where:

  • ( P(T) ) = total panchos consumed
  • ( r(t) ) = eating rate (panchos/min)
  • ( T ) = competition duration

The AI continuously adjusts ( r(t) ) based on fatigue, fullness, and time remaining.

We also model stomach capacity dynamics:

[ C(t) = C_0 + \alpha \cdot \log(1 + t) - \beta \cdot F(t) ]

Where:

  • ( C(t) ) = available capacity
  • ( F(t) ) = fatigue function
  • ( \alpha, \beta ) = adaptation coefficients

We predict when you’ll slow down before it happens—and correct it in real time.
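A sketch of the capacity model (the coefficients and the default fatigue curve are illustrative assumptions):

```python
import math

def available_capacity(t, c0=20.0, alpha=2.0, beta=0.5, fatigue=None):
    """C(t) = C0 + alpha*log(1 + t) - beta*F(t): baseline capacity plus a
    slow logarithmic adaptation, minus a fatigue penalty. The coefficients
    and the default fatigue curve are illustrative assumptions."""
    F = fatigue if fatigue is not None else (lambda t: t ** 1.2)
    return c0 + alpha * math.log1p(t) - beta * F(t)
```

The log term grows slower than the fatigue penalty, so capacity eventually declines, which is exactly the wall the system tries to predict.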

Features:

  • Live pace tracking (panchos/minute)
  • AI voice coach: “Hydrate. Reduce chew cycles. Maintain rhythm.”
  • Predictive finish score
  • Technique switching (dunk vs dry, split vs whole)

How we built it

We combined:

  • A lightweight mobile interface for real-time input
  • A backend ML model trained on simulated eating curves
  • Rule-based overrides for safety and edge cases

Our system behaves like a feedback controller:

[ r_{optimal}(t) = r_{base} \cdot e^{-\lambda F(t)} + \gamma \cdot (T - t) ]

In practice:

  • Early phase → controlled aggression
  • Mid phase → efficiency stabilization
  • Final phase → all-out burst

We also implemented a greedy optimization layer:

[ \max \sum_{i=1}^{n} p_i \quad \text{subject to} \quad C(t) \leq C_{max} ]

Meaning: Eat as many panchos as possible without triggering shutdown.
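Both layers fit in a few lines; a sketch, with illustrative constants:

```python
import math

def controlled_rate(t, T, fatigue, r_base=4.0, lam=0.3, gamma=0.05):
    """Feedback control law: fatigue-damped base rate plus a linear
    time-remaining urgency term (lam and gamma are illustrative)."""
    return r_base * math.exp(-lam * fatigue) + gamma * (T - t)

def greedy_bite(rate, dt, gastric_load, c_max):
    """Greedy layer: eat at the commanded rate unless that would exceed
    capacity C_max, in which case clamp intake to whatever still fits."""
    return min(rate * dt, max(0.0, c_max - gastric_load))
```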


Challenges we ran into

  • Human body ≠ deterministic system: our model assumed smooth decay; reality gave us chaos.

  • Latency vs. reaction time: by the time the AI says “slow down,” you’re already suffering.

  • Data scarcity: shockingly, there is no public dataset of “panchos eaten over time.”

  • The fullness wall: at some point,

[ \lim_{t \to T} r(t) \rightarrow 0 ]

And no model can save you.


Accomplishments that we're proud of

  • Turned eating into a quantifiable system
  • Built a working real-time optimizer within a single hackathon
  • Created a model that adapts mid-competition
  • Achieved a theoretical improvement:

[ \Delta P \approx +18\% \text{ over naive pacing} ]

  • Most importantly: We made judges say, “Wait… this is actually genius.”

What we learned

  • Optimization beats brute force
  • Pacing is everything:

[ r(t) > r_{max} \Rightarrow \text{early burnout} ]

  • The real constraint isn’t speed—it’s capacity management
  • Humans are noisy systems, not clean equations
  • And most importantly: The difference between winning and losing is often strategy, not effort

What's next for hot dog eating boosted with AI

We’re just getting started.

Next iterations include:

Reinforcement Learning Eaters

Train agents to discover optimal strategies:

[ \pi^*(s) = \arg\max_a \mathbb{E}[P(T) \mid s, a] ]
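An RL agent would estimate π* from rollouts. As a deterministic stand-in, the same argmax can be computed exactly by backward induction on a toy pacing MDP whose dynamics we invented purely for illustration:

```python
def optimal_pacing_policy(T=5, fatigue_levels=4):
    """Backward induction on a toy pacing MDP. Illustrative dynamics:
    'fast' eats 3 panchos but raises fatigue (and only yields 1 once
    fatigue >= 2); 'steady' eats 2 and lowers fatigue."""
    def step(f, action):
        if action == "fast":
            return min(f + 1, fatigue_levels - 1), (1 if f >= 2 else 3)
        return max(f - 1, 0), 2

    V = {f: 0 for f in range(fatigue_levels)}   # value-to-go after time T
    policy = {}
    for t in reversed(range(T)):
        newV = {}
        for f in range(fatigue_levels):
            best = max(("steady", "fast"),
                       key=lambda a: step(f, a)[1] + V[step(f, a)[0]])
            f2, r = step(f, best)
            policy[(t, f)] = best
            newV[f] = r + V[f2]
        V = newV
    return policy, V[0]   # optimal panchos from (t=0, fatigue=0)
```

Even this toy version recovers the pacing intuition: sprint only when fresh, stabilize when gassed.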


Biofeedback Integration

Internal state awareness beyond visible performance.

**A Better Model**


The high-value biometric layer (realistic + defensible)

1. Cardiovascular strain → fatigue predictor

  • Heart rate (HR)
  • Heart rate variability (HRV)

We extend the fatigue function:

[ F(t) = w_1 \cdot \text{HR}(t) + w_2 \cdot (1 - \text{HRV}(t)) + w_3 \cdot t ]

This lets us detect impending collapse before the pace drops.
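A sketch of the extended fatigue function (the weights, resting heart rate, and HRV normalization are placeholder assumptions):

```python
def fatigue_index(hr, hrv, t, w1=0.01, w2=1.0, w3=0.05,
                  hr_rest=60.0, hrv_max=100.0):
    """F(t) = w1*HR + w2*(1 - HRV) + w3*t, with HR measured as elevation
    above rest and HRV normalized to [0, 1]. All weights and the
    normalization constants are illustrative assumptions."""
    hr_term = w1 * max(0.0, hr - hr_rest)
    hrv_term = w2 * (1.0 - min(hrv / hrv_max, 1.0))
    return hr_term + hrv_term + w3 * t
```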


2. Breathing + vagal stress → choke risk

  • Respiratory rate
  • Breath irregularity

New constraint layer:

[ r(t) \leq r_{\text{safe}}(t) = f(\text{respiration}, \text{HR}) ]

We are not just optimizing speed; we are avoiding catastrophic failure events.


3. Gastric load estimation (our “leptin substitute”)

We can approximate fullness dynamically:

[ G(t) = \int_0^t v(\tau)\,d\tau - k \cdot \text{emptying}(t) ]

Where:

  • ( v(t) ) = ingestion rate
  • emptying ≈ slow, but non-zero

This becomes the real constraint:

[ r(t) \rightarrow 0 \quad \text{as} \quad G(t) \rightarrow G_{max} ]

A true wall predictor.


4. Micro-movement / chew detection

Using accelerometer or jaw sensors:

  • Chew cycles per bite
  • Swallow latency

We quantify something no competitor tracks:

[ \text{Efficiency} = \frac{\text{Panchos}}{\text{Chew Cycles}} ]


Today: “We optimize what you do.”

Next: “We will optimize what your body can sustain in real time.”

Flagship feature: the “Wall Prediction Index”

[ W(t) = P(\text{rate collapse in the next 60 s}) ]

Driven by:

  • HR spike
  • HRV drop
  • Gastric load estimate
  • Momentum decay

AI coach:

“Wall in 45 seconds if pace continues. Reduce intake velocity by 12%.”

This is the competitive advantage.
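One way to sketch W(t) is as a logistic score over those four signals (the bias and weights below are placeholders, not fitted values):

```python
import math

def wall_probability(hr_spike, hrv_drop, gastric_frac, momentum,
                     bias=-3.0, weights=(2.0, 2.0, 3.0, -1.5)):
    """W(t): probability of rate collapse in the next 60 s, scored as a
    logistic over four risk signals (each scaled to roughly [0, 1]).
    The bias and weights are illustrative placeholders, not fitted values."""
    w_hr, w_hrv, w_g, w_m = weights
    z = (bias + w_hr * hr_spike + w_hrv * hrv_drop
         + w_g * gastric_frac + w_m * momentum)
    return 1.0 / (1.0 + math.exp(-z))
```

A coach prompt like the one above could fire whenever W(t) crosses a threshold such as 0.7.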


Pushing the frontier

  • Pre-competition optimization (nutrition timing, stomach training)
  • Personalized pacing curves learned over time
  • Digital twin of the eater:

[ \text{Human}_{model} \approx f(\text{history}, \text{biometrics}, \text{strategy}) ]

The first physiological control system for human performance in extreme consumption.


Vision

  • Model internal state, not just visible output

A legitimate high-performance system… just applied to a very unusual sport.



🤖 Autonomous Eating Assist Systems

Yes, we’re thinking robotic augmentation (ethically questionable, competitively dominant)


Global Pancho Leaderboard

Standardized metric:

[ \text{PPM} = \frac{\text{Panchos}}{\text{Minute}} ]


Final Vision

Turn competitive eating into a data-driven sport where:

  • Every bite is optimized
  • Every second is modeled
  • Every competitor becomes a system

In the end, this isn’t about eating.

It’s about answering one question:

[ \max_{\text{human}} \; \text{Panchos} ]

And we’re getting closer to the limit.

Built With

  • hot-meal
  • jaw-va
  • pie-thin

Updates



The ultimate goal is to shift the core concept from:

Maximize panchos consumed

to something inclusive and sustainable:

Optimize human performance under health, safety, and access constraints

Mathematically, the objective becomes:

[ \max \int_0^T r(t)\,dt \quad \text{subject to:} ]

[ \text{Health}(t) \geq H_{min}, \quad \text{Access} \geq A_{min}, \quad \text{Waste} \leq W_{max} ]

That one change unlocks alignment with multiple SDGs.


1. Health-first (SDG 3: Good Health & Well-being)

Competitive eating is inherently extreme, so we flip the narrative:

Add a Health Constraint Layer

Instead of “override limits,” the AI enforces:

  • Safe heart rate ceilings
  • Choking risk thresholds
  • Digestive stress limits

The system becomes:

[ r_{safe}(t) = \min(r_{optimal}(t), r_{health}(t)) ]

Product shift

  • AI coach says: “Reduce pace—risk threshold exceeded.”
  • Auto-throttle instead of pure maximization

The system shifts from maximizing output to preventing harm.
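A sketch of the constraint layer (all thresholds are illustrative, not medical guidance):

```python
def safe_rate(r_optimal, hr, choke_risk=0.0,
              hr_ceiling=160.0, risk_max=0.3, r_throttle=1.5):
    """Health constraint layer: r_safe = min(r_optimal, r_health).
    r_health drops to a throttled rate once any safety threshold trips.
    All thresholds here are illustrative, not medical guidance."""
    threshold_tripped = hr > hr_ceiling or choke_risk > risk_max
    r_health = r_throttle if threshold_tripped else float("inf")
    return min(r_optimal, r_health)
```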


2. Inclusivity (SDG 10: Reduced Inequalities)

Right now, this favors:

  • Elite eaters
  • People with specific body types
  • Access to training/data

We fix that with adaptive baselines.

Instead of comparing everyone to absolute output:

[ \text{Performance Score} = \frac{P(t)}{P_{\text{personal baseline}}} ]

This allows:

  • Smaller competitors
  • Beginners
  • Different body types

to compete meaningfully.

Add modes:

  • Beginner mode → pacing + safety coaching
  • Accessibility mode → slower cadence, alternative metrics
  • Non-competitive mode → skill training (chew efficiency, rhythm)

3. Responsible consumption (SDG 12)

Introduce a Waste Efficiency Metric

[ \text{Efficiency} = \frac{\text{Consumed}}{\text{Prepared Food}} ]

Track:

  • Food wasted
  • Over-ordering
  • Leftovers

AI coaching shift:

  • “You’ve exceeded optimal intake—continuing increases waste risk.”
  • “Redistribute remaining food.”

We optimize consumption efficiency, not excess.

4. Data ethics + accessibility (SDG 9 + 16)

Principles:

  • No required wearables → optional, not mandatory
  • On-device processing → minimize data sharing
  • Transparent models → explain why advice is given

Add:

  • “Why this recommendation?” button
  • Local-only mode (no cloud)

Now you’re aligned with ethical AI deployment, not just performance tech.


5. Expand beyond competitive eating

The core generalizes into a real-time human optimization engine.

A. Nutrition training

  • Healthy pacing
  • Portion awareness
  • Mindful eating

B. Clinical / recovery use

  • Eating disorder recovery pacing (carefully, with experts)
  • Post-surgery intake monitoring

C. Food security contexts

  • Optimize caloric intake under constraints
  • Efficient distribution modeling
  • SDG 2 (Zero Hunger)
  • SDG 3 (Health)

6. Redefine the “win condition”

Not this:

[ \max \text{Panchos} ]

But this:

[ \max \left( \text{Performance} \times \text{Safety} \times \text{Efficiency} \right) ]

Where:

  • Safety penalizes risky behavior
  • Efficiency penalizes waste
  • Performance is normalized per individual

“A real-time system for optimizing human consumption safely, efficiently, and inclusively.”


“How do we optimize nutrition delivery when resources are scarce?”


A powerful alternative direction

1. Nutrition optimization under constraint

The system becomes:

[ \max \text{Nutritional Intake} \quad \text{subject to limited food supply} ]

We track:

  • Calories absorbed
  • Micronutrient density
  • Satiety efficiency

2. Application in food aid systems

Think:

  • Refugee camps
  • Disaster relief
  • School feeding programs

The AI could help:

  • Allocate food more efficiently
  • Reduce waste
  • Personalize portions based on need

3. Ethical guardrails (non-negotiable)

  • No targeting vulnerable populations as participants
  • No incentives that encourage overconsumption or harm
  • Clear health safeguards
  • Partnerships with credible orgs (NGOs, public health groups)

The deeper truth

Efficiency matters most when resources are scarce.

But the way to act on that is:

  • Protect people in scarcity
  • Optimize systems around them

—not turn them into the system.


The reframing: a real-time human nutrition optimization platform for constrained environments.

This aligns with global goals while keeping the “optimization + feedback loop” DNA that makes the project interesting.


What we’re proposing is essentially:

A real-time control system for human nutrition under constraints

Core System:

Mission

Optimize nutritional outcomes per unit resource in real time.


1. The Objective Function

We maximize nutrition efficiency:

[ \max \int_0^T N(t)\,dt ]

Where:

  • ( N(t) ) = nutritional value absorbed per unit time

Subject to:

[ R(t) \leq R_{available}, \quad H(t) \geq H_{safe} ]

  • ( R(t) ) = resource consumption (food, water)
  • ( H(t) ) = health state (must remain safe)

2. Real-Time State Model (the “digital twin”)

Each person becomes a dynamic system:

[ S(t) = \{\text{energy}, \text{hydration}, \text{micronutrients}, \text{stress}\} ]

We estimate state using:

Inputs (low-cost, scalable)

  • Age, weight, sex
  • Recent food intake
  • Simple symptoms (fatigue, dizziness)
  • Optional:

    • Heart rate (cheap wearables)
    • MUAC (mid-upper arm circumference, used in malnutrition screening)

3. Nutrition Value Function

Not all calories are equal.

[ N(t) = \sum_i w_i \cdot n_i(t) ]

Where:

  • ( n_i ) = nutrients (calories, protein, iron, vitamin A, etc.)
  • ( w_i ) = priority weights based on deficiency risk

Example:

  • Malnourished child → protein + micronutrients weighted higher
  • Dehydrated adult → fluids weighted higher
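A sketch of the value function, with illustrative nutrient names and weights:

```python
def nutrition_value(intake, weights):
    """N = sum_i w_i * n_i: nutrient amounts weighted by deficiency-risk
    priority. Nutrient names, amounts, and weights are illustrative."""
    return sum(weights.get(name, 0.0) * amount for name, amount in intake.items())

# Priority weights shift with the person's state (illustrative numbers):
child_weights = {"calories": 0.001, "protein_g": 0.5, "iron_mg": 1.0}
dehydrated_weights = {"calories": 0.001, "water_ml": 0.01}

meal = {"calories": 400, "protein_g": 12, "iron_mg": 3, "water_ml": 200}
```

The same meal scores differently depending on whose deficiency profile it is evaluated against.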

4. Real-Time Allocation Engine

This is the “AI coach,” but for survival and recovery:

[ a^*(t) = \arg\max_a \mathbb{E}[N(t) \mid S(t), R(t)] ]

Outputs:

  • What to eat
  • How much
  • When

5. Scarcity Optimization

Instead of optimizing for one person, we optimize across a population:

[ \max \sum_{j=1}^{m} N_j(t) ]

Subject to:

[ \sum_{j=1}^{m} R_j(t) \leq R_{total} ]

Use cases:

  • Food distribution in refugee camps
  • Disaster response
  • School meal optimization
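A greedy sketch of the allocation step, with one portion per person and invented numbers; greedy is a heuristic here, not an exact solution to the underlying knapsack-style problem:

```python
def allocate_portions(people, budget):
    """Greedy population allocation: serve people in order of nutrition
    gained per unit resource until the budget runs out (one portion each).
    `people` maps name -> (nutrition_gain, resource_cost); all numbers
    are illustrative."""
    ranked = sorted(people.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    plan, remaining = [], budget
    for name, (gain, cost) in ranked:
        if cost <= remaining:
            plan.append(name)
            remaining -= cost
    return plan, remaining
```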

6. Real-Time Feedback Loop

Same idea as PanchoMoneyball, but reframed:

Dashboard shows:

  • Nutritional intake rate
  • Deficiency risk
  • Time-to-stabilization

AI prompts:

  • “Prioritize protein source now.”
  • “Hydration deficit detected.”
  • “Iron intake insufficient—adjust meal composition.”

7. Practical Deployment Environments

A. Humanitarian Aid (highest impact)

Partner with orgs like:

  • World Food Programme
  • UNICEF

Use cases:

  • Refugee camps
  • Famine zones
  • Emergency feeding centers

B. Schools in low-resource areas

  • Optimize meal programs
  • Track nutritional outcomes over time

C. Disaster response

  • Allocate limited supplies dynamically
  • Prevent both underfeeding and waste

8. Hardware Strategy

Tier 1 (baseline – scalable)

  • Smartphone app
  • Manual input
  • Visual guides

Tier 2 (enhanced)

  • Basic wearables (heart rate)
  • Portable MUAC tape

Tier 3 (advanced, optional)

  • Smart utensils / portion estimation
  • Computer vision for food tracking

9. Non-negotiable ethical guardrails

  • No coercion or forced optimization
  • Human override always available
  • Transparent recommendations
  • Local cultural food integration

These guardrails align with:

  • United Nations Development Programme
  • SDG 2 (Zero Hunger)
  • SDG 3 (Health)

10. What makes this actually novel

A lot of systems:

  • Track nutrition (static)
  • Plan meals (offline)

Almost none are this: a closed-loop, real-time nutrition control system under resource constraints.


11. Minimal viable product

Inputs:

  • User profile
  • Food available list
  • Meal intake logging

Outputs:

  • “Best next meal” recommendation
  • Daily nutrition score
  • Deficiency alerts

Core model:

Greedy optimization:

[ \max \frac{N}{R} ]

(“most nutrition per unit resource”)
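A sketch of that greedy core (meal names, nutrients, and weights are illustrative placeholders):

```python
def best_next_meal(available, weights):
    """Greedy N/R: pick the meal with the highest weighted-nutrition to
    resource-cost ratio. Meal names, nutrient fields, and weights are
    illustrative placeholders."""
    def ratio(item):
        nutrients, cost = item[1]
        n = sum(weights.get(k, 0.0) * v for k, v in nutrients.items())
        return n / cost
    return max(available.items(), key=ratio)[0]
```

For a protein-deficient profile, a cheaper but protein-dense option wins even against a higher-calorie meal.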


12. Long-term vision

“We don’t just track food. We optimize how humans convert limited resources into survival and recovery.”


The strategic insight

The original system optimized:

throughput (how fast you eat)

This system optimizes:

outcomes (how well you survive and recover)

Same math. Completely different impact.

