Inspiration
As someone who often stares at their fridge, deciding what to eat can take anywhere from 1 to 10 minutes.
What it does
You upload an image of your fridge and add a description of the taste you're currently craving. Once that's submitted, the AI agent pipeline begins. The first AI summarizes your description and the detected ingredients, then passes that information to a second AI that generates a recipe. That recipe goes to a third AI programmed to judge whether the dish meets the quality the user asked for; if it doesn't, the Cooking AI and the Food Adviser AI go back and forth until a suitable recipe is found.
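The back-and-forth between the Cooking AI and the Food Adviser AI can be sketched as a simple loop. This is a hypothetical illustration only: the function names, scoring scale, and threshold are assumptions, and the real LLM calls (Groq/Gemini via uAgents) are replaced with toy stubs.

```python
# Sketch of the Cooking AI / Food Adviser AI feedback loop.
# propose_recipe and judge_recipe stand in for the real LLM agents;
# the logic inside them is a stub, not HungryFlow's actual prompts.

MAX_ROUNDS = 5          # safety cap so the agents cannot loop forever
QUALITY_THRESHOLD = 8   # assumed minimum score (out of 10) to accept

def propose_recipe(ingredients, taste, feedback=None):
    """Cooking AI: draft a recipe, refining it if feedback was given."""
    base = f"Stir-fry with {', '.join(ingredients)} ({taste} profile)"
    return base + (" - revised: " + feedback if feedback else "")

def judge_recipe(recipe, taste):
    """Food Adviser AI: score the recipe and suggest an improvement."""
    # Toy rule: a recipe counts as good once it has been revised once.
    score = 9 if "revised" in recipe else 5
    return score, "add more seasoning to match the requested taste"

def find_recipe(ingredients, taste):
    feedback = None
    recipe = ""
    for _ in range(MAX_ROUNDS):
        recipe = propose_recipe(ingredients, taste, feedback)
        score, feedback = judge_recipe(recipe, taste)
        if score >= QUALITY_THRESHOLD:
            return recipe
    return recipe  # give up and return the last attempt

print(find_recipe(["eggs", "spinach"], "savory"))
```

The round cap is worth keeping in any real version too: without it, two disagreeing agents can burn API calls indefinitely.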
How we built it
We used Fetch.ai (uAgents), Groq, Gemini 2.5 Flash, Postman, FastAPI, React with Tailwind, and Vite to build this project. Gemini 2.5 Flash scans the photo for ingredients, React with Tailwind and Vite serve as the frontend framework, and Fetch.ai and Groq power the AI agents that carry out our goal.
Challenges we ran into
The first challenge was that we initially planned to use YOLO for this project, but when we tried to implement it, it wouldn't work. The next challenge was connecting our backend to our AI agents' server, which was a difficult task. The biggest challenge was simply getting a good internet connection while we worked on the project.
Accomplishments that we're proud of
We finally got all of our AI agents talking to each other, including two that go back and forth until a condition is met.
What we learned
We learned
What's next for HungryFlow
HungryFlow's next steps are adding voice recognition to let users enter their description by speech, and adding a database for food.