Inspiration

Our team has always been passionate about machine learning, so we were thrilled to take on the challenge of building a project that incorporated multiple aspects of AI. This hackathon gave us the perfect opportunity to bring our skills together in an exciting, innovative way.

What it does

Our mobile app lets users take a picture of their fridge, pantry, or individual food items, and from there it does the heavy lifting. The image is analyzed by a computer vision model that detects the ingredients, which are then cross-referenced against a dataset of over 5,000 recipes. Based on the user's dietary restrictions and preferences, the app generates a recipe, complete with healthy ingredient substitutions. This helps users, particularly those with dietary needs or restrictions, find easy meals they can prepare with what they already have.
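
For illustration, here is a minimal sketch of that cross-referencing step in Python; the function and field names are hypothetical stand-ins for our actual dataset schema:

```python
def match_recipes(detected, recipes, top_k=5):
    """Rank recipes by how much of their ingredient list the user already has."""
    detected_set = {i.lower() for i in detected}
    scored = []
    for recipe in recipes:
        required = {i.lower() for i in recipe["ingredients"]}
        overlap = detected_set & required
        missing = required - detected_set
        # Favor recipes that use more of what is already in the fridge.
        score = len(overlap) / len(required) if required else 0.0
        scored.append((score, recipe["name"], sorted(missing)))
    scored.sort(reverse=True)
    return scored[:top_k]

recipes = [
    {"name": "Veggie Omelette", "ingredients": ["eggs", "spinach", "cheese"]},
    {"name": "Caprese Salad", "ingredients": ["tomato", "mozzarella", "basil"]},
]
print(match_recipes(["Eggs", "Spinach", "Tomato"], recipes))
```

The missing-items list that falls out of this scoring is exactly what feeds the substitution step described below.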

How we built it

The front end is built with Swift and SwiftUI to provide an intuitive user interface. The app talks to a Flask-based backend hosted in the cloud. When an image is uploaded, the backend sends it to a custom-trained Azure Computer Vision model that identifies the ingredients present. The backend then searches for relevant recipes by comparing the detected ingredients to a large recipe dataset. Using retrieval-augmented generation (RAG), the ingredients, missing items, and recipe context are provided to an LLM (ChatGPT), which formulates a healthy recipe based on the user's dietary restrictions. The entire process is powered by API calls and JSON data exchanges.
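
A condensed sketch of that backend flow, assuming Azure Custom Vision's Python prediction client and the OpenAI Python SDK; `find_recipes` is a hypothetical stand-in for the dataset lookup, and the model name and confidence threshold are illustrative choices rather than our exact configuration:

```python
import os
from flask import Flask, request, jsonify
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials
from openai import OpenAI

app = Flask(__name__)
credentials = ApiKeyCredentials(in_headers={"Prediction-key": os.environ["AZURE_PREDICTION_KEY"]})
vision = CustomVisionPredictionClient(os.environ["AZURE_ENDPOINT"], credentials)
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

def find_recipes(ingredients):
    """Hypothetical stand-in for the recipe-dataset lookup described above."""
    return "Veggie Omelette: eggs, spinach, cheese", ["cheese"]

@app.route("/analyze", methods=["POST"])
def analyze():
    image = request.files["image"].read()
    # Run the custom-trained detector; keep confident predictions only.
    result = vision.detect_image(os.environ["PROJECT_ID"], "ingredient-detector", image)
    ingredients = [p.tag_name for p in result.predictions if p.probability > 0.5]

    recipe_context, missing = find_recipes(ingredients)
    prompt = (
        f"Ingredients on hand: {', '.join(ingredients)}.\n"
        f"Missing items: {', '.join(missing)}.\n"
        f"Candidate recipes:\n{recipe_context}\n"
        "Write one healthy recipe that respects the user's dietary restrictions, "
        "suggesting substitutions for missing or restricted ingredients."
    )
    reply = llm.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"recipe": reply.choices[0].message.content})

if __name__ == "__main__":
    app.run()
```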

Challenges we ran into

We encountered challenges such as resolving GitHub merge conflicts, training the computer vision model on accurately labeled data, dealing with oversized image files in API requests, and managing the complexity of wiring up a seamless AI pipeline. On top of that, a teammate had to leave partway through the event, so we continued development short-handed with limited time remaining.
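
For the oversized-image problem, the standard remedy is to downscale and re-encode the photo before it goes into the API request; here is a hedged Pillow sketch (the size cap and JPEG quality are assumptions to tune against the vision API's request limits):

```python
import io
from PIL import Image

def shrink_image(raw_bytes, max_side=1024, quality=85):
    """Downscale and re-encode an image so it stays under API size limits."""
    img = Image.open(io.BytesIO(raw_bytes))
    img = img.convert("RGB")             # drop alpha so JPEG encoding works
    img.thumbnail((max_side, max_side))  # resizes in place, keeps aspect ratio
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()
```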

Accomplishments that we're proud of

We're proud of building the entire pipeline ourselves, without relying on external frameworks like LangChain. We integrated the Spoonacular API to source and embed recipe data, then used it to give the LLM context for RAG-based ingredient substitutions. Learning Swift in a short period and applying it to finish a project at our first hackathon was a significant achievement for the team.
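As a rough illustration of what RAG without LangChain can look like, here is a sketch using OpenAI embeddings and an in-memory cosine-similarity search; the embedding model and helper names are assumptions, not our exact setup:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    """Embed a batch of recipe descriptions (model choice is an assumption)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def top_k_recipes(query, recipe_texts, recipe_vecs, k=3):
    """Cosine-similarity retrieval over precomputed recipe embeddings."""
    q = embed([query])[0]
    sims = recipe_vecs @ q / (np.linalg.norm(recipe_vecs, axis=1) * np.linalg.norm(q))
    return [recipe_texts[i] for i in np.argsort(sims)[::-1][:k]]

# Usage: embed the Spoonacular-sourced recipes once at startup, then
# retrieve the closest matches per query and stuff them into the prompt.
# recipe_vecs = embed(recipe_texts)
# context = top_k_recipes("spinach, eggs, tomato; vegetarian", recipe_texts, recipe_vecs)
```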

What we learned

We gained valuable experience with Swift and Azure Cognitive Services, and learned how to set up an end-to-end AI pipeline. The project also taught us how to manage complex system integrations and handle multiple APIs.

What's next for Pantry Pilot

We plan to continue refining and training our computer vision model for greater accuracy, and to implement additional features that cater to a wider and more diverse audience. We aim to make the app more inclusive and versatile for users with various dietary preferences and restrictions.

Built With

Swift, SwiftUI, Flask, Python, Azure Computer Vision, OpenAI (ChatGPT), Spoonacular API, JSON
