Inspiration
- Cooking at home is hard because people don’t know what to make or how to make it. We wanted to reduce friction between what’s in your fridge and what you can cook by combining computer vision, AI, and a large-scale recipe dataset into one smooth experience. Meal Snap was inspired by the idea that deciding what to cook should be as simple as taking a photo.
What it does
- Meal Snap lets users upload a photo of their fridge or ingredients and automatically detects food items using computer vision. Those detected ingredients are combined with user input to generate relevant recipe suggestions. Users can log in with their Google account or email/password and explore detailed recipes, and the groundwork is in place for saving and personalizing recipes tied to each account. An AI voice assistant can also talk users through the steps while they cook.
How we built it
- Frontend: Built with Next.js and React, with a modern, responsive UI styled with Tailwind CSS.
- Authentication: Auth0 handles secure login and identity management, supporting both OAuth and email/password flows.
- Computer Vision: Roboflow detects ingredients from uploaded images.
- AI: The Gemini API generates recipe suggestions based on detected ingredients and user input; ElevenLabs provides voice instructions.
- Data Storage: Snowflake stores and serves a large, read-heavy recipes dataset. Supabase stores user-specific data such as user profiles and saved recipes.
- Backend Architecture: Next.js server routes validate Auth0 sessions, securely map authenticated users to their Supabase records, and query the Snowflake recipe dataset.
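To make the backend pattern concrete, here is a minimal sketch of how a server route could gate access on an Auth0 session before touching user data. The helper names (`getAuth0Session`, `queryRecipes`) and the session shape are illustrative assumptions, not the project's actual API:

```typescript
// Shape of the session object we expect back from the Auth0 SDK (assumed).
type Session = { user?: { sub?: string } } | null;

// Validate the session and return the stable Auth0 user id (the `sub` claim),
// which is the key we would map to the user's Supabase records.
export function requireUserId(session: Session): string {
  const sub = session?.user?.sub;
  if (!sub) throw new Error("Unauthorized: no valid Auth0 session");
  return sub;
}

// Sketch of a Next.js App Router handler using the helper above:
// export async function POST(req: Request) {
//   const session = await getAuth0Session(req);      // assumed Auth0 helper
//   const userId = requireUserId(session);           // 401 if this throws
//   const { ingredients } = await req.json();
//   const recipes = await queryRecipes(ingredients); // assumed Snowflake call
//   return Response.json({ userId, recipes });
// }
```

Keeping the validation in a small pure function makes the auth check easy to reuse across routes and to test without mocking the Auth0 SDK.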
Challenges we ran into
- One of the main challenges was managing and querying a large, read-heavy recipe dataset while still supporting highly dynamic user input. Recipes live in Snowflake, which is optimized for analytics and large scans, while user interactions such as ingredient detection, free-text prompts, and future saved recipes are highly variable and time-sensitive. Designing queries that could efficiently translate noisy, real-world user input into meaningful recipe results required careful thought around data modeling, filtering, and batching requests.
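One way to handle the batching problem described above is to normalize the noisy detector labels first, then issue a single parameterized Snowflake query that scores recipes by ingredient overlap, rather than one query per ingredient. The table and column names (`RECIPES`, `TITLE`, `INGREDIENTS_TEXT`) below are assumptions for illustration:

```typescript
// Normalize detector output: lowercase, trim, drop empty strings and duplicates.
export function normalizeIngredients(labels: string[]): string[] {
  const seen = new Set<string>();
  for (const label of labels) {
    const clean = label.trim().toLowerCase();
    if (clean) seen.add(clean);
  }
  return [...seen];
}

// Build one batched, parameterized query: each ingredient contributes a bind
// variable, and recipes are ranked by how many detected ingredients they mention.
export function buildRecipeQuery(labels: string[], limit = 20) {
  const ingredients = normalizeIngredients(labels);
  if (ingredients.length === 0) throw new Error("no ingredients to search for");
  const scoreTerms = ingredients
    .map(() => "IFF(CONTAINS(LOWER(INGREDIENTS_TEXT), ?), 1, 0)")
    .join(" + ");
  const sql = `
    SELECT * FROM (
      SELECT TITLE, INGREDIENTS_TEXT, (${scoreTerms}) AS MATCH_SCORE
      FROM RECIPES
    )
    WHERE MATCH_SCORE > 0
    ORDER BY MATCH_SCORE DESC
    LIMIT ${limit}`;
  return { sql, binds: ingredients };
}
```

Scoring by overlap instead of requiring an exact match keeps results useful even when the detector misses or mislabels an item, and the single scan plays to Snowflake's strengths.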
Accomplishments that we're proud of
- Combined computer vision and AI to turn raw images into actionable recipe ideas.
What we learned
- We gained hands-on experience integrating multiple AI-driven components into a single production-style workflow. We learned how computer vision outputs can be noisy and probabilistic, and how important it is to design downstream systems that are resilient to imperfect predictions.
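A sketch of the kind of guardrail this lesson implies: keep only confident, deduplicated detections before they reach the recipe prompt. The detection shape below mirrors a typical object-detection response; the field names and the 0.5 threshold are assumptions:

```typescript
type Detection = { label: string; confidence: number };

// Drop low-confidence predictions, collapse duplicate labels (keeping the
// highest confidence seen), and return labels sorted most-confident first.
export function filterDetections(
  detections: Detection[],
  minConfidence = 0.5,
): string[] {
  const best = new Map<string, number>();
  for (const d of detections) {
    const label = d.label.trim().toLowerCase();
    if (!label || d.confidence < minConfidence) continue;
    best.set(label, Math.max(best.get(label) ?? 0, d.confidence));
  }
  return [...best.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([label]) => label);
}
```

Downstream code then only ever sees a clean, ordered ingredient list, so a stray low-confidence prediction never derails the recipe suggestions.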
What's next for Meal Snap
- Next, we want to let users build meal plans based on nutrition goals, dietary restrictions, and favourite foods. We also plan to improve ingredient detection accuracy and personalize recommendations over time. Long-term, we want to add nutritional tracking and smarter suggestions based on past cooking habits.
Built With
- auth0
- gemini
- nextjs
- snowflake
- supabase