Inspiration
Static diet plans are useful, but real life rarely stays static. A person might have a hard training day, a rest day, poor sleep, low energy, higher stress, or a disrupted schedule, and those changes can make a fixed plan feel out of sync almost immediately. We wanted to build something that makes a baseline diet more adaptable from day to day without trying to replace a nutrition professional. NutriFlow Agent came from that idea: keep the structure of the original plan, but help people adjust it based on what is actually happening today.
What it does
NutriFlow Agent is a multimodal daily diet planning assistant. A user uploads their baseline diet as an image or PDF, optionally adds screenshots from an Apple Watch, another smartwatch, or a health app, and then provides daily context as text. Gemini extracts the baseline diet into structured data and generates a daily adjusted plan based on the uploaded diet, the user’s context, and any optional activity screenshots.
The key product decision is that NutriFlow Agent does not invent a brand-new diet. It preserves the original diet structure and adjusts the daily planning around it. The final experience includes a baseline summary, an adjusted daily plan, analytics and dashboard views, and a PDF export.
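To make the baseline-preserving idea concrete, here is a minimal TypeScript sketch. The type names and shapes are assumptions for illustration (the real schema, which the project validates with zod, may look different): the point is that every meal in the original plan survives, in order, and daily context only adjusts around it.

```typescript
// Hypothetical shapes for the extracted baseline diet and a day's context.
// These are illustrative, not the project's actual schema.
interface Meal {
  name: string;    // e.g. "Breakfast"
  items: string[]; // foods from the original plan
}

interface BaselineDiet {
  meals: Meal[];
}

interface DailyContext {
  note: string; // free-text daily input, e.g. "hard training day"
  extraPortions?: Record<string, string[]>; // per-meal additions, keyed by meal name
}

// Adjust the plan for today without inventing a new diet: every baseline
// meal is kept, in order, with optional additions appended.
function adjustForToday(baseline: BaselineDiet, ctx: DailyContext): BaselineDiet {
  return {
    meals: baseline.meals.map((meal) => ({
      name: meal.name,
      items: [...meal.items, ...(ctx.extraPortions?.[meal.name] ?? [])],
    })),
  };
}
```

Because the adjustment is a pure transformation of the extracted baseline, the adjusted plan can never drift into a diet disconnected from what the user uploaded.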
How we built it
We built NutriFlow Agent with a React + Vite frontend and a Node + TypeScript backend. The client and server communicate over WebSockets, which keeps the interaction flow simple and responsive. For AI, we used Gemini through the Google GenAI SDK for both multimodal extraction and adjusted-plan generation.
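The writeup does not document the WebSocket protocol, but the client/server exchange can be sketched as a small discriminated union of message types. All names here are assumptions, not the project's real wire format:

```typescript
// Hypothetical message shapes for the client/server WebSocket channel.
type ClientMessage =
  | { type: "upload_baseline"; fileBase64: string; mimeType: string }
  | { type: "daily_context"; text: string };

type ServerMessage =
  | { type: "baseline_ready"; baselineJson: string }
  | { type: "adjusted_plan"; planJson: string }
  | { type: "error"; message: string };

// Dispatching on the `type` discriminant lets the compiler check that
// every message kind is handled.
function describe(msg: ServerMessage): string {
  switch (msg.type) {
    case "baseline_ready":
      return "Baseline extracted";
    case "adjusted_plan":
      return "Today's plan is ready";
    case "error":
      return `Error: ${msg.message}`;
  }
}
```

A tagged union like this keeps the interaction flow easy to extend: adding a new event means adding one variant, and the compiler points at every switch that needs updating.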
The architecture is intentionally simple and stateless. There is no database, no authentication layer, and no complex orchestration. The backend handles uploaded inputs, calls Gemini to extract the baseline diet into structured data, then uses that structured baseline plus daily context and optional wearable screenshots to generate the adjusted plan. We deployed the project on Google Cloud Run to keep the infrastructure lightweight and easy to operate.
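The stateless flow described above can be sketched as a single request handler. `extractBaseline` and `generateAdjustedPlan` stand in for the actual Gemini calls made through the Google GenAI SDK; their names and signatures are assumptions, shown here as injected functions so the flow itself is clear:

```typescript
// Stand-ins for the two Gemini calls: extraction and plan generation.
type Extractor = (fileBytes: Uint8Array) => Promise<string>; // structured baseline as JSON text
type Generator = (baselineJson: string, context: string) => Promise<string>;

// The stateless request flow: no database, no session store. Everything
// needed to produce the response travels with the request itself.
async function handlePlanRequest(
  upload: Uint8Array,
  dailyContext: string,
  extractBaseline: Extractor,
  generateAdjustedPlan: Generator,
): Promise<{ baseline: string; adjustedPlan: string }> {
  const baseline = await extractBaseline(upload);
  const adjustedPlan = await generateAdjustedPlan(baseline, dailyContext);
  return { baseline, adjustedPlan };
}
```

Keeping the handler a pure function of its inputs is also what makes a serverless target like Cloud Run a natural fit: any instance can serve any request.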
Challenges we ran into
One of the biggest challenges was scope control. We explored live audio and more agent-like real-time behavior, but it was not stable enough for the submission deadline. Real-time session handling introduced more complexity than we could confidently polish in time, especially when multimodal inputs and constrained generation were already core parts of the project.
That forced us to simplify aggressively and focus on what worked reliably. We had to make tradeoffs between ambition and demo readiness. In practice, that meant prioritizing strong multimodal extraction, predictable plan generation, and a clean end-to-end flow over shipping every interactive idea we initially wanted.
Accomplishments that we're proud of
We’re proud that NutriFlow Agent can take a baseline diet from an image or PDF, turn it into structured data, and use that as the basis for a daily adjustment workflow instead of generating something disconnected from the user’s original plan. That baseline-preserving behavior was important to us, and we were able to make it work in a practical way.
We’re also proud of the overall product flow. The app supports multimodal inputs, produces constrained daily planning output, presents the result in a clear dashboard-style UI, and includes PDF export. On top of that, we kept the system stateless and simple enough to deploy cleanly on Google Cloud Run, which made the project easier to reason about during the hackathon.
What we learned
We learned very quickly that constrained AI output matters more than stacking on interaction layers. It is far more valuable to get reliable structured extraction and a dependable adjusted-plan output than to ship a more ambitious interface that behaves inconsistently.
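One concrete form this lesson takes is validating the model's output before using it. The project lists zod in its stack for this kind of schema checking; the hand-rolled guard below is a self-contained stand-in, and the output shape it checks is an assumption:

```typescript
// Hypothetical shape for one extracted meal; the real schema may differ.
interface ExtractedMeal {
  name: string;
  items: string[];
}

// Parse and validate model output. Returning null on any mismatch means
// the app rejects bad output outright instead of guessing at its meaning.
function parseMeals(raw: string): ExtractedMeal[] | null {
  try {
    const data = JSON.parse(raw);
    if (!Array.isArray(data)) return null;
    const ok = data.every(
      (m) =>
        typeof m?.name === "string" &&
        Array.isArray(m?.items) &&
        m.items.every((i: unknown) => typeof i === "string"),
    );
    return ok ? (data as ExtractedMeal[]) : null;
  } catch {
    return null; // model produced non-JSON text
  }
}
```

A strict reject-or-accept boundary like this is what makes the rest of the pipeline predictable: downstream code never has to handle half-valid plans.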
We also learned that multimodal input is useful even without a complicated architecture. A baseline diet upload plus optional wearable screenshots already adds meaningful context when paired with strong prompting and clear output constraints. Most importantly, we were reminded that hackathon execution is about prioritization. Reliability beats feature creep, especially when the goal is to show a complete working product.
What's next for NutriFlow Agent
The next version of NutriFlow Agent will focus on making the assistant more dynamic without losing the baseline-diet-preserving approach. Our priorities are:
- Adding a true live agent experience with both audio and text.
- Improving the interaction layer so users can talk naturally and type when needed.
- Expanding wearable and smartwatch integration beyond screenshot-based input.
- Building toward a real-time smartwatch monitoring app or companion experience with stronger real-time adaptation.
- Keeping the original diet structure intact while making the assistant more responsive to daily changes as they happen.
We see this version as a solid foundation. The next step is not to make it more complex for its own sake, but to make the interaction more natural and the adaptation more timely while staying grounded in a user’s actual baseline plan.
Built With
- express.js
- google-cloud-run
- google-gemini-api
- google-genai-sdk
- node.js
- react
- tailwind-css
- typescript
- vite
- websockets
- zod

