Inspiration
When I recently started living independently with a roommate, I had to adjust to living on a much tighter budget than before. Our pantry and fridge often had limited ingredients, and I found myself cooking the same meals repeatedly—not because better options weren’t possible, but because I didn’t know what else I could make with what I already had.
At the same time, I am someone who goes to the gym and actively tries to stay fit. I realized that even with inexpensive ingredients, it should be possible to cook meals that are both healthy and aligned with fitness goals—but discovering those meals required time, planning, and nutritional knowledge that most people don’t have.
I discussed this problem with my father, who runs a hospital, and he mentioned that many of his patients struggle with the same issue. Patients are often advised to follow specific diets, but they don’t know how to prepare suitable meals using the ingredients they already have at home. This made it clear that the problem wasn’t just personal—it was widespread.
That’s when the idea was born: an intelligent assistant that understands what food you have, your health goals, and your dietary needs, and then helps you decide what to cook next.
What it does
Our application is an AI-powered food and nutrition assistant that helps users make smarter meal decisions based on what they already have at home.
Users can:
- Scan or upload images of their fridge or pantry
- Automatically detect food items using AI vision
- Track inventory with estimated expiry dates
- Generate personalized recipes based on available ingredients
- Receive meal recommendations aligned with fitness and health goals
- Log meals and track daily calories and macronutrients
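To make the last point concrete, here is a minimal sketch of how daily calorie and macronutrient totals could be computed from logged meals. The `MealLog` shape and field names are illustrative assumptions, not the project's actual types.

```typescript
// Illustrative meal-log shape (field names are assumptions, not the real schema).
interface MealLog {
  name: string;
  calories: number;
  protein: number; // grams
  carbs: number;   // grams
  fat: number;     // grams
}

// Sum calories and macros across all meals logged for a day.
function dailyTotals(meals: MealLog[]): Omit<MealLog, "name"> {
  return meals.reduce(
    (acc, m) => ({
      calories: acc.calories + m.calories,
      protein: acc.protein + m.protein,
      carbs: acc.carbs + m.carbs,
      fat: acc.fat + m.fat,
    }),
    { calories: 0, protein: 0, carbs: 0, fat: 0 }
  );
}
```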
All data is stored locally on the user’s device, ensuring privacy and offline functionality, while Google Gemini is used for intelligent analysis and recommendations.
How we built it
This project is a frontend-only React application built with a service-oriented architecture.
State management and persistence are handled using custom React hooks and the browser’s localStorage API. This allows the app to work offline and retain user data between sessions without requiring accounts or external databases.
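The persistence logic behind such a hook can be sketched as a pair of load/save helpers over a storage interface. This is a minimal sketch under assumptions: the function names (`loadState`, `saveState`) and the `KVStore` abstraction are illustrative, not the project's actual API (in the browser, `localStorage` itself satisfies this interface).

```typescript
// Anything with localStorage's getItem/setItem shape works here,
// which also makes the logic testable outside a browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Read and deserialize persisted state, falling back on missing or corrupt data.
function loadState<T>(store: KVStore, key: string, fallback: T): T {
  const raw = store.getItem(key);
  if (raw === null) return fallback;
  try {
    return JSON.parse(raw) as T;
  } catch {
    // A corrupted entry should degrade gracefully, not crash the app.
    return fallback;
  }
}

// Serialize and persist state for the next session.
function saveState<T>(store: KVStore, key: string, value: T): void {
  store.setItem(key, JSON.stringify(value));
}
```

A `useLocalStorage`-style hook would wrap these helpers in `useState` plus an effect that calls `saveState` whenever the value changes.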
The intelligence layer is powered by Google Gemini:
- Gemini Vision is used to analyze fridge and pantry images
- Gemini Text models generate recipes, meal plans, and nutritional insights
- AI responses are structured as JSON to integrate cleanly into the UI
Key architectural components:
- Custom hooks for inventory and health data management
- A Gemini service layer for all AI interactions
- Modular UI pages for scanning, recipes, and health tracking
The system was designed to keep data local, fast, and privacy-first, while still leveraging powerful cloud AI for reasoning and recommendations.
Challenges we ran into
One of our biggest challenges was realizing that traditional recipe APIs prioritize ingredient matching over user intent. This often led to recommendations that ignored dietary preferences, nutritional goals, or health constraints.
To overcome this, we moved away from rigid recipe databases and adopted a generative AI approach. Using Gemini allowed us to treat dietary preferences and health goals as hard constraints rather than soft filters.
Another challenge was designing an architecture that didn’t rely on external databases. Ensuring data persistence, consistency, and performance using only localStorage required careful state synchronization and validation.
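The validation side of this can be sketched as a type guard that filters out malformed entries before the app trusts persisted data. The `InventoryItem` shape here is an assumption for illustration, not the project's actual schema.

```typescript
// Illustrative inventory schema (fields are assumptions).
interface InventoryItem {
  name: string;
  quantity: number;
  expiresAt: string; // ISO date string
}

// Runtime type guard: localStorage contents can't be trusted at compile time.
function isInventoryItem(v: unknown): v is InventoryItem {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.name === "string" &&
    typeof o.quantity === "number" &&
    typeof o.expiresAt === "string"
  );
}

// Drop malformed entries instead of letting one bad record crash the UI.
function sanitizeInventory(raw: unknown): InventoryItem[] {
  return Array.isArray(raw) ? raw.filter(isInventoryItem) : [];
}
```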
Finally, structuring AI responses into predictable JSON formats was critical to maintaining reliability across the UI.
Accomplishments that we're proud of
- Built a fully functional AI-powered nutrition assistant without a backend database
- Successfully integrated image-based food recognition using Gemini Vision
- Designed a privacy-first architecture where user data never leaves the device
- Created meaningful, health-aware recipe recommendations instead of generic suggestions
- Delivered a polished, end-to-end product within a hackathon timeline
What we learned
We learned that recommendation quality depends more on reasoning than data volume. Generative AI allowed us to prioritize intent, constraints, and personalization in ways traditional APIs could not.
We also gained hands-on experience designing scalable frontend architectures, managing persistent client-side state, and structuring AI outputs for real-world applications.
Most importantly, we learned how to bridge everyday human problems with AI-driven solutions that feel practical and intuitive.
What's next for the project
Our long-term vision is to evolve this application beyond a traditional web interface and integrate it directly into smart kitchen environments.
We aim to explore integration with smart refrigerators that feature built-in screens, internal cameras, and sensor systems. By combining image recognition, weight sensors, and inventory tracking, the system could automatically detect what items are inside the fridge without requiring manual input.
In this future version, the fridge itself would:
- Continuously track available ingredients
- Detect item quantity and freshness
- Update inventory in real time
- Provide on-screen meal recommendations and health insights
This would allow users to receive personalized, health-aware meal suggestions directly from their fridge, making healthy eating decisions effortless.
Additional future directions include:
- Advanced personalization based on long-term eating habits
- Support for medical and recovery-focused diets
- Multi-device synchronization across households
Ultimately, we envision this project becoming an intelligent kitchen assistant that helps reduce food waste, improve health outcomes, and simplify everyday cooking decisions.
Built With
- css
- gemini-vision
- google-gemini-api
- html
- localstorage-api
- react
- typescript