Inspiration
"Fridge Blindness" is a silent epidemic. It’s 6 PM on a Tuesday. You open the fridge, stare at a drawer full of produce, and feel two things: decision fatigue and guilt. You aren't sure if that pepper is still safe to eat, and you lack the mental energy to find a recipe for it. So, you close the door and order takeout.
We set out to break this cycle. An estimated 30-40% of the US food supply goes to waste, much of it at the household level, driven by confusion over freshness and the lack of an immediate way to use what's on hand.
Standard computer vision sees "a banana." But that doesn’t help you. We were inspired to build Mind Your Bite to answer the real human questions: "Is this banana still good?" and "What can I make with it right now?"
We built this for the Snap Best Use of Spatial AI track because we believe AR shouldn't just overlay data; it should reason about our environment. We also targeted the Sustainability track to demonstrate how experiential tech can act as a supportive, judgment-free companion that reduces waste by meeting users exactly where they are—hands-free, inside their kitchen.
What it does
When you put on the Spectacles and look inside your refrigerator, Mind Your Bite transforms from a passive observer into an active kitchen assistant.
Visual Reasoning: You look at an item (like a shriveled pepper) and engage the microphone.
Multimodal Analysis: The system doesn't just identify the object; it analyzes visual freshness cues—color changes, spots, bruising, or mold.
The "Use Loop": It returns a minimalist AR overlay with three key data points:
Identification: The item name.
State: A clear categorization of Fresh, Use Soon, or Likely Spoiled, with a one-line reason (e.g., "Skin is wrinkling").
Action: A generated recipe or storage tip specific to that item's condition (e.g., "Perfect for a stir-fry tonight").
If the AI is unsure due to lighting or occlusion, it prioritizes safety, displaying "Not Sure" and asking you to adjust your view.
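For illustration, the overlay's payload can be modeled as a small structured result. Here is a minimal sketch of that shape in TypeScript; the field and type names are our own for this write-up, not taken from the project's source:

```typescript
// Possible freshness states, including the safety fallback.
type FreshnessState = "Fresh" | "Use Soon" | "Likely Spoiled" | "Not Sure";

// Illustrative shape of one analysis result (names are hypothetical).
interface BiteResult {
  item: string;           // Identification, e.g. "banana"
  state: FreshnessState;  // State categorization
  reason: string;         // One-line justification, e.g. "Skin is wrinkling"
  action: string;         // Recipe or storage tip, e.g. "Perfect for a stir-fry tonight"
}
```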
How we built it
Mind Your Bite is a native Spectacles experience built in Lens Studio.
1. Large Language Model Integration: We integrated the Gemini API using a Remote Service Module to perform image-based reasoning. On explicit user intent, a single lightweight camera frame is sent for inference, returning structured judgments on food identity, visible condition, and recommended action. This single-frame, on-demand design keeps latency low and the LLM integration practical.
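As a hedged sketch of what such a request could look like: Gemini's public generateContent REST endpoint accepts a text prompt plus an inline base64 image in one call. Apart from that public request shape, everything here is an assumption; `sendJson` is a hypothetical stand-in for the Remote Service Module's HTTP call, and the model name, key handling, and parsing are illustrative:

```typescript
// A sketch only: sendJson is a hypothetical stand-in for the Remote
// Service Module's HTTP call; model name and parsing are illustrative.
declare const API_KEY: string;
declare const SYSTEM_PROMPT: string; // see the prompt sketch in the next step
declare function sendJson(url: string, body: object, onDone: (raw: string) => void): void;
declare function onResult(result: BiteResult): void; // handled in the state-flow sketch below

const GEMINI_URL =
  "https://generativelanguage.googleapis.com/v1beta/models/" +
  "gemini-1.5-flash:generateContent?key=" + API_KEY;

function analyzeFrame(jpegBase64: string): void {
  // Public Gemini generateContent shape: one text part + one inline image part.
  const body = {
    contents: [{
      parts: [
        { text: SYSTEM_PROMPT },
        { inline_data: { mime_type: "image/jpeg", data: jpegBase64 } },
      ],
    }],
  };
  sendJson(GEMINI_URL, body, (raw) => {
    // The model's answer text sits at candidates[0].content.parts[0].text.
    const text = JSON.parse(raw).candidates[0].content.parts[0].text;
    onResult(JSON.parse(text)); // parse into the result shape sketched above
  });
}
```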
2. Prompting, Evaluation, and Guardrails: We developed the system prompt through iterative testing and evaluation. Over multiple revisions, we tightened scope and precision by adding guardrails that restrict analysis to food items only and require the model to clearly return “Not Sure” when visual confidence is low.
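The exact prompt isn't reproduced in this write-up; the sketch below is a plausible reconstruction of the guardrails described above (food-only scope, a forced "Not Sure" under low confidence, and a fixed output schema), not the team's verbatim prompt:

```typescript
// Reconstructed for illustration; not the project's actual prompt.
const SYSTEM_PROMPT = `
You are a kitchen freshness assistant looking at a single photo.
Rules:
- Only analyze food items. If the main object is not food, return state "Not Sure".
- If lighting, occlusion, or image quality leaves you unsure, return "Not Sure"
  and ask the user to adjust their view.
- Never guess that spoiled food is safe.
Respond with JSON only:
{"item": string,
 "state": "Fresh" | "Use Soon" | "Likely Spoiled" | "Not Sure",
 "reason": one short sentence,
 "action": one recipe or storage tip matched to the item's condition}
`;
```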
3. Application Logic and Latency Handling: We used TypeScript to manage application state, intent handling, and response flow. Drawing from Snapchat’s Spectacles GitHub examples, we implemented proven UI patterns for latency masking. Inference is user-triggered by design, with a custom UI flow that keeps the experience engaging while results are generated.
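A minimal sketch of that state handling, with hypothetical names throughout: the idea is that inference only fires on an explicit trigger, and the UI enters a "thinking" state immediately so the wait reads as intentional rather than frozen:

```typescript
// Hypothetical app states; names are ours, not from the project source.
type AppState = "Idle" | "Capturing" | "Analyzing" | "ShowingResult";

let state: AppState = "Idle";

declare function analyzeFrame(jpegBase64: string): void; // from the request sketch above
declare function showLoadingUI(): void;                  // placeholder latency-masking UI
declare function hideLoadingUI(): void;
declare function captureFrameBase64(): string;           // hypothetical single-frame capture
declare function renderOverlay(r: BiteResult): void;     // minimalist AR card

// Called when the user explicitly engages the mic / trigger.
function onUserTrigger(): void {
  if (state !== "Idle") return;  // ignore re-triggers mid-flight
  state = "Capturing";
  showLoadingUI();               // latency masking starts immediately
  const frame = captureFrameBase64();
  state = "Analyzing";
  analyzeFrame(frame);
}

function onResult(result: BiteResult): void {
  hideLoadingUI();
  state = "ShowingResult";
  renderOverlay(result);         // item, state, and action from the result shape above
}
```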
Challenges we ran into
1. Prompt Engineering for Safety and Precision: Balancing usefulness with restraint required careful iteration. The main challenge was ensuring the system avoided overconfidence by ignoring non-food objects and explicitly surfacing uncertainty when visual signals were ambiguous.
2. Real-World Visual Conditions: Refrigerators present difficult environments with low light, clutter, and occlusion. Achieving consistent behavior across these conditions required repeated testing and refinement of confidence thresholds to prevent unreliable outputs.
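One way to read "refinement of confidence thresholds" is a client-side gate that coerces anything below a tuned cutoff to the safe fallback. A hedged sketch, assuming the prompt is extended to return a numeric confidence; the field name and threshold value are assumptions, not from this write-up:

```typescript
// BiteResult and FreshnessState come from the result-shape sketch above.
// Assumed extension: the model also returns a confidence in [0, 1].
interface ScoredResult extends BiteResult {
  confidence: number;
}

// Illustrative cutoff; in practice this would be tuned against dim,
// cluttered fridge shots.
const MIN_CONFIDENCE = 0.6;

function gate(result: ScoredResult): BiteResult {
  if (result.confidence < MIN_CONFIDENCE) {
    return {
      item: result.item,
      state: "Not Sure",
      reason: "Low visual confidence (lighting or occlusion)",
      action: "Adjust your view and try again",
    };
  }
  return result;
}
```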
Accomplishments that we're proud of
1. Proof of concept: A working Spatial AI prototype that connects live vision, large language model reasoning, and immediate, action-oriented guidance inside a real kitchen environment.
2. Safety-first evaluation: A conservative decision framework that prioritizes uncertainty and user trust over speculative outputs.
3. Real-world robustness: A responsive experience that performs reliably despite low light, clutter, and occlusion in everyday refrigerators.
What we learned
1. Context-driven decision making: Food decisions are not isolated judgments but depend on environment, timing, and surrounding items. This reinforced that Spatial AI is most valuable when it reasons about context and intent together, rather than treating objects as standalone inputs.
2. Feature expansion opportunities: Building the prototype expanded our thinking beyond freshness checks into multi-item reasoning, spatial persistence, and habit-forming interactions. These insights shaped a clearer roadmap for features that can drive adoption while advancing the goal of reducing household food waste.
What's next for Mind Your Bite - Spatial AI for everyday food decisions
Our current proof of concept connects vision to intelligence for single items. Our future vision leverages the full power of the Spectacles:
Spatial Anchoring & World Mesh: Moving from single-item scanning to a continuous scan mode, where freshness tags are persistently anchored to items in 3D space across the fridge shelf using the World Mesh.
Multi-Object Reasoning: Scanning a whole shelf to suggest a recipe based on multiple items that need to be used soon (e.g., "You have spinach and eggs expiring; make an omelet.").
Social Integration: Sharing a "Use Soon" list with roommates via Snap to coordinate dinner plans.
Built With
- claude
- figma
- javascript
- lens-studio
- snap-spectacles
- typescript
