We wanted to build an easy way for people to find recipes based on what they already have in their fridge or pantry, and to incorporate machine learning into the app.
What it does
FridgeFusion is an app that detects food items in images and adds them to an ingredient list. Based on those ingredients, FridgeFusion suggests recipes you can make with what you already have.
How I built it
- Prototyped with Google's Vision API
- Built in Swift
- Used a pre-trained ML model (ResNet50)
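As a sketch of how the pre-trained ResNet50 model can be wired up on iOS: Apple's Vision framework wraps a Core ML model and returns classification observations for a photo. The `classifyIngredient` function name is ours for illustration; it assumes `Resnet50.mlmodel` has been added to the Xcode project so the `Resnet50` class is generated.

```swift
import UIKit
import Vision
import CoreML

// Hypothetical helper: classify an ingredient photo with the
// pre-trained Resnet50 Core ML model via the Vision framework.
func classifyIngredient(in image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          // The Resnet50 class is auto-generated by Xcode from Resnet50.mlmodel.
          let resnet = try? Resnet50(configuration: MLModelConfiguration()),
          let model = try? VNCoreMLModel(for: resnet.model) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Take the top classification label, e.g. "Granny Smith".
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The detected label can then be appended to the app's ingredient list.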
Challenges I ran into
- Losing two developers at the beginning of the hackathon
- Converting the camera feed buffer to a UIImage
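One way the buffer-to-UIImage conversion can be done, for anyone hitting the same wall: wrap the `CVPixelBuffer` from the camera feed in a `CIImage` and render it through a `CIContext`. This is a minimal sketch, not necessarily the exact approach used in the app.

```swift
import UIKit
import CoreImage
import CoreVideo

// Convert a camera frame's pixel buffer into a UIImage by rendering
// it through Core Image. Returns nil if rendering fails.
func uiImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

Creating a fresh `CIContext` per frame is expensive; in a real video pipeline the context should be created once and reused.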
Accomplishments that I'm proud of
- Working with my teammate to produce a working demo
- Performing image detection on a live video feed
What I learned
- How to use Google's Vision API
- How to import and use Apple's pre-trained ML models
- How to get a video stream on iOS using AVKit
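The video-stream setup described above can be sketched with AVFoundation (which underlies AVKit's capture APIs). The `CameraFeed` class name is illustrative; the delegate callback is where each frame arrives for image detection.

```swift
import AVFoundation

// Minimal sketch of a live camera feed on iOS. Each captured frame is
// delivered to the sample-buffer delegate, where detection can run.
final class CameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        // Deliver frames on a background queue to keep the UI responsive.
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // CMSampleBufferGetImageBuffer(sampleBuffer) yields the CVPixelBuffer
        // for this frame, which can then be classified.
    }
}
```

Note that camera access requires an `NSCameraUsageDescription` entry in the app's Info.plist.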
What's next for Fridge Fusion
- Add recipe suggestions to the app
- Create a cleaner UI