As students, food constantly gets lost and forgotten about in the abyss that is our fridges, leading to empty stomachs and hurt wallets. We decided to help fix that.
What it does
Our web app lets users submit a photo of the inside of their fridge and returns a list of its contents, along with possible recipes based on the ingredients available.
How we built it
The GUI is a React app that uploads a file to our Flask endpoint. The Flask endpoint processes the image using Google Vision and identifies the items in the picture. Using web scraping with the Beautiful Soup Python library, we populated our database with recipes and the ingredients they contain. We cross-check which ingredients are available and work out the possible recipes. The endpoint then responds to the front-end's request with all the identified ingredients and the URL of every possible recipe.
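The cross-check step boils down to a set comparison between the labels Google Vision detected and each recipe's ingredient list. Here is a minimal sketch of that idea; the recipe data, URLs, and function name are illustrative, not our actual database contents.

```python
# Sketch of the recipe cross-check: compare ingredients detected in the
# photo against each recipe's ingredient set from the database.
# The recipes below are made-up examples.
RECIPES = {
    "omelette": {"ingredients": {"egg", "cheese", "butter"},
                 "url": "https://example.com/omelette"},
    "tomato pasta": {"ingredients": {"pasta", "tomato", "garlic"},
                     "url": "https://example.com/tomato-pasta"},
}

def possible_recipes(detected, recipes=RECIPES):
    """Return (name, url) for every recipe whose ingredients are all
    present among the labels detected in the fridge photo."""
    available = {label.lower() for label in detected}
    return [(name, info["url"])
            for name, info in recipes.items()
            if info["ingredients"] <= available]  # subset test

# Only the omelette's ingredients are fully covered here.
print(possible_recipes(["Egg", "Cheese", "Butter", "Milk"]))
```

Keeping ingredients as sets makes the "is every ingredient available?" check a one-line subset test.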
Challenges we ran into
- Linking our Python endpoint to the React web app.
- Having too many items in a photo causes Google Vision to return generic tags such as 'food'.
- Uploading a file from the web app to our endpoint.
Accomplishments that we're proud of
An interesting combination of different technologies in our stack, particularly the use of GCP for AI.
What we learned
We learned how to use GCP, as well as the difficulties of linking a Python endpoint to a React web app.
What's next for fridgenator
To get more accurate readings on the ingredients available, we might need to do a recursive "scan" of the image. It would also be worth exploring the quantity available of each ingredient. We would also try to remove duplicate recipes, and maybe add ratings for the recipes so we know which would be the best ones to make.
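The duplicate removal we mention could be as simple as keying recipes by URL and keeping the first hit. A minimal sketch, assuming recipes arrive as (name, url) pairs (a hypothetical shape, not our current data model):

```python
def dedupe_recipes(recipes):
    """Drop duplicate recipes, keeping the first occurrence of each URL."""
    seen, unique = set(), []
    for name, url in recipes:
        if url not in seen:      # first time we see this recipe page
            seen.add(url)
            unique.append((name, url))
    return unique

hits = [("Omelette", "https://example.com/omelette"),
        ("Classic Omelette", "https://example.com/omelette"),
        ("Tomato Pasta", "https://example.com/pasta")]
print(dedupe_recipes(hits))  # the second omelette entry is dropped
```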