NutriLens was created with the goal of designing for sustainability and a more eco-conscious future. Initially, we focused on carbon impact, but we soon realized that many users were more motivated to make sustainable choices when the healthier option was also clear. For these reasons, we explored a simple question: for people who are busy, how can we remove the need to pull out their phones and search? With NutriLens, users can simply use their AR/VR glasses to scan, compare, and instantly see which option is better for them, nutritionally and environmentally.
NutriLens is an augmented-reality lens designed for Spectacles that detects and compares two edible items, providing users with simplified nutrition information in real time.
We developed NutriLens using Lens Studio, starting from the open-source Depth Cache template to handle surface understanding and world anchoring. We wrote custom TypeScript scripts to process AI responses, place AR labels, and trigger interactive events such as selection and feedback. We also used the Spectacles Interaction Kit to enable user interactions and adapted each component to match our nutrition comparison system.
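As a rough illustration of the "process AI responses, place AR labels" step, a parsed nutrition record might be condensed into a short label string before being attached to an anchored text object. The `NutritionInfo` fields and `formatLabel` helper below are hypothetical stand-ins, not the actual project code:

```typescript
// Hypothetical shape of one detected item's nutrition data.
interface NutritionInfo {
  name: string;
  calories: number;
  sugarGrams: number;
}

// Keep labels compact: text on Spectacles needs to stay readable at a glance.
function formatLabel(info: NutritionInfo): string {
  return `${info.name}\n${info.calories} kcal | ${info.sugarGrams} g sugar`;
}

console.log(formatLabel({ name: "Apple", calories: 95, sugarGrams: 19 }));
```

In the real Lens, a string like this would be assigned to a dynamically created text component at the item's world-anchored position.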
Our team also ran into some unique challenges: learning TypeScript when most of the team had little prior exposure to it, understanding Snapchat's API in detail, and dealing with realistic hardware constraints, from the battery life of AR glasses to our own glasses overheating a couple of minutes before the deadline! One of the hardest problems was prompt engineering: the AI model would usually identify the right information about our food items, but occasionally its labels would overlap on screen or its output would not exactly match the expected format. Gemini returns structured text, and turning that into compact, readable AR labels (while avoiding overlapping text in 3D) required several rounds of formatting and UI updates. Our item labels are created dynamically, so we could not simply drag components onto them in the editor; instead, we had to attach Interactable components and scripts programmatically, which required understanding Lens Studio's lifecycle and APIs. Finally, Gemini needed to output extremely strict JSON and predictable comparison text, and the smallest format mismatch would break the Lens, so we spent time designing and refining prompts that were both consistent and readable.
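One way to keep a strict-JSON pipeline from breaking the Lens is to validate the model's output before it ever reaches the UI, and signal a retry on any mismatch. The sketch below is a minimal illustration of that idea; the schema and field names are invented for the example and are not our actual prompt contract:

```typescript
// Illustrative schema for a two-item comparison response.
interface ComparisonResult {
  items: { name: string; calories: number }[];
  winner: string;
}

// Returns the parsed result, or null so the caller can re-prompt the model
// instead of crashing on malformed or mis-shaped output.
function validateResponse(raw: string): ComparisonResult | null {
  try {
    const data = JSON.parse(raw);
    const ok =
      Array.isArray(data.items) &&
      data.items.length === 2 &&
      data.items.every(
        (i: any) => typeof i.name === "string" && typeof i.calories === "number"
      ) &&
      typeof data.winner === "string";
    return ok ? (data as ComparisonResult) : null;
  } catch {
    return null; // not JSON at all: treat as a format mismatch
  }
}
```

Defensive parsing like this complements prompt refinement: even a well-tuned prompt occasionally drifts, and a null return is far easier to recover from than an exception mid-frame.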
This project was very impactful for our group, and we learned a lot. The team successfully built a working real-time AR experience on Spectacles, even with the constraints of device-based interaction. We created a clean system for identifying two food or drink items and generating nutrition comparisons instantly. Additionally, we integrated TypeScript logic with Depth Cache, world anchors, and custom UI to produce readable AR labels. Our group also gained significant exposure to the Spectacles Interaction Kit. Finally, we are proud to have built a full end-to-end pipeline: image → detection → AI → AR visualization → user interaction.
During this hackathon, the team learned Lens Studio and the Spectacles hardware, including Depth Cache, world tracking, and interaction systems. We also learned how to use TypeScript inside Lens Studio to manage UI, object instantiation, and custom events; how to structure and enforce strict response formats when working with AI vision models; how AR object placement works in 3D space and how to avoid label overlap; and, finally, debugging skills, as we debugged real-time AR interactions and translated AI output into readable, user-friendly UI on the glasses.
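The overlap-avoidance idea can be sketched in a few lines: if two label anchor points in world space are closer than some minimum separation, nudge the second label upward. The `Vec3` type and the separation threshold below are simplified stand-ins (assumptions for the example) rather than our actual Lens Studio code, which used its built-in vector types:

```typescript
// Simplified world-space position; Lens Studio provides its own vec3 type.
type Vec3 = { x: number; y: number; z: number };

const MIN_SEPARATION = 0.15; // metres; a hypothetical tuning value

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Returns a position for the second label that will not overlap the first:
// labels far enough apart are left alone, close ones are raised vertically.
function resolveOverlap(first: Vec3, second: Vec3): Vec3 {
  if (distance(first, second) >= MIN_SEPARATION) return second;
  return { ...second, y: first.y + MIN_SEPARATION };
}
```

A vertical nudge is a cheap heuristic that works well when both items sit on the same surface, which is the common case when comparing two products side by side.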
Our team really enjoyed building NutriLens, and we want to take the project to another level after the hackathon! We want to add gesture-based confirmation (e.g., a thumbs up) when selecting the healthier option. We also want to expand the nutrition details with macros, ingredients, and allergy alerts. Lastly, we want to incorporate a sustainability mode that evaluates packaging waste and carbon impact, a history option so that users can review their past healthy choices, a multi-item detection system, and celebration feedback such as confetti when a user picks the healthier choice.
Built With
- ai
- ar
- gemini
- lens
- snapchat
- typescript