FoodFrame — Mixed-Reality Nutrition Coach
Demo access: website username "prakash"
Short Summary
Main Idea: Turn food tracking into a spatial, hands-free experience: see macros in real time, get goal-aware coaching, and log meals with a tap or a voice cue.
Outlook / Vision: Always-on AR companion that recognizes foods around you, overlays trustworthy nutrition labels, and nudges you toward targets—at home, at work, or while shopping.
What We Implemented (Quest 3): Live food recognition, instant macro overlays, a one-line coach tip, and one-tap/voice logging to an external tracker. Spatial widgets (rings, coach card, recent log) persist via anchors.
Full Write-up
Inspiration
Traditional tracking breaks the flow of eating—searching, typing, guesstimating portions. FoodFrame brings the numbers and coaching into your field of view so logging and better choices feel effortless.
What It Does
Live Food Tracking
- Visual detection of common foods; per-item and meal totals (kcal, carbs, fats, protein; optional fiber/water).
- Portion assumptions shown transparently; when uncertain, display a range and ask for quick confirmation.
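The range-and-confirm behavior above can be sketched as follows. This is an illustrative Python stand-in (the actual app runs C# in Unity), and the function name, the 0.8 confidence cutoff, and the ±25% tolerance are assumptions, not the shipped values:

```python
# Hypothetical sketch: scale per-100g macros to an estimated portion, and
# fall back to a low/high range plus a confirmation prompt when the
# detector's confidence is low. Thresholds here are illustrative only.

def macro_estimate(food, grams, confidence, tolerance=0.25):
    """Return point macros, or a +/- range when confidence is low."""
    scale = grams / 100.0
    point = {k: round(v * scale, 1) for k, v in food["per_100g"].items()}
    if confidence >= 0.8:
        return {"point": point, "needs_confirmation": False}
    # Low confidence: show a transparent range and ask the user to confirm.
    low = {k: round(v * (1 - tolerance), 1) for k, v in point.items()}
    high = {k: round(v * (1 + tolerance), 1) for k, v in point.items()}
    return {"range": (low, high), "needs_confirmation": True}

apple = {"per_100g": {"kcal": 52, "carbs": 14.0, "fat": 0.2, "protein": 0.3}}
print(macro_estimate(apple, 150, confidence=0.9))
```

The key design point is that uncertainty is surfaced, not hidden: the UI only asks for a tap when the estimate is genuinely unsure.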
Personalized Coaching
- Suggestions aligned with your diet profile and today’s progress (e.g., “Add ~20 g protein to hit target”).
- Safer alternatives when scanning high-calorie items; never guess allergens—say “I don’t know” when unclear.
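A goal-aware tip like "Add ~20 g protein" can be chosen by comparing today's totals against the user's targets. The sketch below is a simplified stand-in; the field names, thresholds, and wording are assumptions, not the app's actual coaching logic:

```python
# Hypothetical tip selection: pick one short, timely tip based on the
# largest remaining gap between today's totals and the user's targets.

def coach_tip(totals, targets):
    """Return a single short coaching tip (never a wall of advice)."""
    gaps = {k: targets[k] - totals.get(k, 0) for k in targets}
    # Protein shortfalls get priority; round to a friendly multiple of 5 g.
    if gaps.get("protein_g", 0) >= 15:
        return f"Add ~{round(gaps['protein_g'] / 5) * 5} g protein to hit target"
    if gaps.get("kcal", 0) < 0:
        return f"You're {-gaps['kcal']} kcal over; consider a lighter option"
    return "On track - nice pacing"

print(coach_tip({"kcal": 1400, "protein_g": 85}, {"kcal": 2200, "protein_g": 130}))
```

Note the deliberate omission: no allergen logic appears here, matching the rule above that the app says "I don't know" rather than guessing.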
Seamless Logging
- “Log this” by voice or tap; syncs macros to your connected tracking app.
- Meal context (time, pre/post-workout) enhances recommendations.
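The shape of a "Log this" entry, bundling items, totals, and meal context, might look like the sketch below. All field names are hypothetical, and the actual REST/WebSocket call to the tracker backend is omitted:

```python
# Illustrative payload builder for a voice/tap "Log this" action; the
# real backend endpoint and tracker integration are not shown here.
import json
from datetime import datetime, timezone

def build_log_entry(items, context=None):
    """Bundle detected items plus meal context into one log entry."""
    totals = {"kcal": 0.0, "carbs": 0.0, "fat": 0.0, "protein": 0.0}
    for item in items:
        for k in totals:
            totals[k] += item["macros"].get(k, 0.0)
    return {
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "context": context or {},  # e.g. {"timing": "post-workout"}
        "items": items,
        "totals": {k: round(v, 1) for k, v in totals.items()},
    }

entry = build_log_entry(
    [{"name": "greek yogurt", "macros": {"kcal": 146, "protein": 20.0}}],
    context={"timing": "post-workout"},
)
print(json.dumps(entry["totals"]))
```

Carrying the context field on every entry is what lets later recommendations distinguish, say, a post-workout snack from a late-night one.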
Spatial Interface
- Floating labels near dishes, daily rings in your space, and a compact coach card.
- Widgets persist across sessions using spatial anchors.
How It’s Implemented
- Engine & MR: Unity on Quest 3 with Passthrough, Scene Understanding, Spatial Anchors, Interaction SDK (hand-first).
- Food Detection: Lightweight on-device models for fast item detection; label → canonical food mapping.
- Portion Estimation: Heuristics + size references; user confirmation for edge cases.
- Nutrition Mapping: Standardized food IDs → nutrition database; unit normalization (g/ml).
- Inference Path: Local for low latency; optional cloud assist for harder cases.
- Voice & Sync: Voice commands, REST/WebSocket backend, and external tracker integration; PWA mirror for rings/logs.
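The label → canonical ID → nutrition lookup with g/ml normalization can be sketched as below. This is a toy stand-in: the IDs, density value, and nutrition numbers are made up for illustration, and the real app uses a full nutrition database:

```python
# Minimal sketch of the mapping pipeline: detector label -> canonical
# food ID -> nutrition record, normalizing grams vs. milliliters.

LABEL_TO_ID = {"banana": "food:banana", "milk": "food:milk-whole"}
NUTRITION_DB = {  # values per 100 g (solids) or 100 ml (liquids); illustrative
    "food:banana": {"unit": "g", "kcal": 89, "carbs": 22.8},
    "food:milk-whole": {"unit": "ml", "density_g_per_ml": 1.03, "kcal": 61, "carbs": 4.8},
}

def lookup(label, amount, unit):
    food = NUTRITION_DB[LABEL_TO_ID[label]]
    # Liquids are stored per 100 ml; convert a gram estimate via density.
    if unit == "g" and food["unit"] == "ml":
        amount = amount / food["density_g_per_ml"]
    scale = amount / 100.0
    return {k: round(v * scale, 1) for k, v in food.items()
            if k in ("kcal", "carbs")}

print(lookup("banana", 120, "g"))
```

Keeping the label→ID mapping separate from the nutrition table means a new detection model only touches the first dictionary, not the database schema.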
Challenges (Condensed)
- Hands-only UX: Reliable pinch/hold without cluttering UI.
- Portion accuracy: Lighting/occlusion → use ranges + confirm.
- Anchor robustness: Keep widgets stable across rooms/sessions.
Accomplishments
- End-to-end MR loop: scan → label → total → coach → log.
- Persistent spatial UI that reappears exactly where you left it.
- Near-real-time headset ↔ mobile sync for rings and logs.
What We Learned
- In MR, tiny, timely tips outperform long advice.
- Showing assumptions builds trust and faster confirmations.
- Clean module boundaries (Detect → Normalize → Map → Coach → Log) speed team iteration.
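The module boundaries above can be shown as a toy pipeline. Every function body here is a stand-in (the real stages are C# components in Unity); the point is only that each stage has one input and one output, so stages can be swapped or tested alone:

```python
# Toy sketch of the Detect -> Normalize -> Map -> Coach -> Log pipeline;
# all function bodies are illustrative stand-ins, not the real modules.

def detect(frame):         # vision-model stand-in
    return [{"label": "Apple", "grams": 150}]

def normalize(items):      # unify labels/units
    return [{**i, "label": i["label"].lower()} for i in items]

def map_nutrition(items):  # label -> macros (made-up per-100g numbers)
    db = {"apple": {"kcal": 52}}
    return [{**i, "kcal": db[i["label"]]["kcal"] * i["grams"] / 100} for i in items]

def coach(items):          # one short tip, never long advice
    return "Light snack - good choice"

def log(items, tip):       # would sync to the tracker backend
    return {"items": items, "tip": tip}

mapped = map_nutrition(normalize(detect(None)))
result = log(mapped, coach(mapped))
print(result["items"][0]["kcal"])
```

Because each stage only depends on the previous stage's output, one teammate could iterate on detection while another tuned coaching, which is the iteration speed-up the lesson above describes.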
What’s Next
- Depth-aware portioning for better volume→mass estimates.
- Ingredient-level parsing for mixed/home-cooked meals.
- Buffet mode warnings and quantified swap suggestions.
- Restaurant menu/barcode import; offline resilience.
- Privacy-respecting shared meals and dietician collaboration.
Built With
Unity (Quest 3), Meta XR (Passthrough, Scene Understanding, Spatial Anchors, Interaction SDK), C#, lightweight vision models, nutrition DB, REST/WebSocket backend, PWA dashboard.