Inspiration

We thought about how difficult it is to decide what to eat at home when there are endless possible recipes to cook, so why not create an app that solves that exact problem for us? Instead of checking the fridge and turning around to order take-out, why not take a picture of your fridge and let a machine figure it out for you?

What it does

This web application takes a photo the user captures of the ingredients they have available. It uploads the image to GCP and, using a pre-trained CNN machine learning model, detects and classifies the individual food items in the photo. The ingredient names the model outputs are shown in a list that the user can append to, so ingredients that were not in the photo can still be added. Finally, the Spoonacular ‘find recipes by ingredients’ API is called to return recipe ideas to the user.
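
A minimal sketch of the recipe-lookup step, assuming the ingredient list has already been produced by the model and merged with the user's additions; the API key and example ingredient names are placeholders, not values from the project:

```python
import requests

SPOONACULAR_API_KEY = "YOUR_API_KEY"  # placeholder

def find_recipes(ingredients, number=5):
    """Ask Spoonacular's 'find recipes by ingredients' endpoint for recipe ideas."""
    response = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={
            "ingredients": ",".join(ingredients),  # e.g. "eggs,tomato,cheddar"
            "number": number,                      # how many recipe ideas to return
            "apiKey": SPOONACULAR_API_KEY,
        },
        timeout=10,
    )
    response.raise_for_status()
    return [recipe["title"] for recipe in response.json()]

# Ingredients detected in the fridge photo plus any the user typed in manually.
print(find_recipes(["eggs", "tomato", "cheddar"]))
```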

How I built it

This web application was built using Python, Node.js, React, HTML, CSS, and C++. For the front end, we made login and sign-up pages that let users sign up with their phone numbers; the numbers were then verified through the Vonage API by sending a PIN code to the number provided. The capture page was built with C++ and React. The backend used the TensorFlow Hub object detection algorithm with pre-trained CNN models, as well as the Google Vision image recognition API.
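
A rough sketch of the PIN verification flow against the Vonage Verify API, written here in Python with requests purely for illustration (the app's actual integration may differ); the key, secret, and phone number are placeholders:

```python
import requests

VONAGE_API_KEY = "YOUR_KEY"        # placeholder
VONAGE_API_SECRET = "YOUR_SECRET"  # placeholder

def send_pin(phone_number):
    """Start a verification: Vonage texts a PIN code to the given number."""
    response = requests.post(
        "https://api.nexmo.com/verify/json",
        data={
            "api_key": VONAGE_API_KEY,
            "api_secret": VONAGE_API_SECRET,
            "number": phone_number,   # e.g. "14155550100"
            "brand": "RecipeGO",      # name shown in the SMS text
        },
        timeout=10,
    )
    return response.json()["request_id"]

def check_pin(request_id, code):
    """Check the PIN the user typed against the pending verification."""
    response = requests.post(
        "https://api.nexmo.com/verify/check/json",
        data={
            "api_key": VONAGE_API_KEY,
            "api_secret": VONAGE_API_SECRET,
            "request_id": request_id,
            "code": code,
        },
        timeout=10,
    )
    return response.json().get("status") == "0"  # "0" means the code matched
```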

Challenges I ran into

Our team ran into a problem with the machine learning side. We originally planned to simply call the Google Vision API; however, it was painfully inaccurate at detecting and classifying the many food items packed into a refrigerator photo. The API was only accurate when given an image of a single item, and we wanted a solution that kept the app simple to use. Taking a separate photo of each food item is most definitely not simple. Our team tried splitting the image into 80 sections (our best estimate for the approximate distance a person would stand from a fridge to take the picture), but that still was not accurate enough.
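
For context, the tiling attempt looked roughly like the sketch below: the fridge photo is cut into a grid of crops and each crop is sent to Google Vision label detection separately. The 8x10 grid shape and the score threshold here are illustrative assumptions, not the exact values we used:

```python
import io
from PIL import Image
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # needs GCP credentials in the environment

def label_tiles(photo_path, rows=8, cols=10, min_score=0.7):
    """Split the fridge photo into rows*cols crops and label each crop separately."""
    photo = Image.open(photo_path)
    width, height = photo.size
    tile_w, tile_h = width // cols, height // rows
    labels = set()
    for r in range(rows):
        for c in range(cols):
            crop = photo.crop((c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h))
            buffer = io.BytesIO()
            crop.save(buffer, format="JPEG")
            response = client.label_detection(image=vision.Image(content=buffer.getvalue()))
            labels.update(
                annotation.description
                for annotation in response.label_annotations
                if annotation.score >= min_score
            )
    return labels
```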

Accomplishments that I'm proud of

We researched throughout the second day of the hackathon and, through trial and error, found a viable solution. Our team used a pre-trained TensorFlow Hub object detection CNN to detect and classify the items in the image. This model was incredibly accurate, letting us correctly classify 95% of the images.
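
A condensed sketch of this kind of TensorFlow Hub detector; the module handle and score threshold shown are assumptions for illustration and may differ from the exact pre-trained model our team used:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Example pre-trained detector from TensorFlow Hub (assumed handle for illustration).
MODULE_HANDLE = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"
detector = hub.load(MODULE_HANDLE).signatures["default"]

def detect_ingredients(photo_path, min_score=0.3):
    """Run the pre-trained detector on the fridge photo and return likely item labels."""
    image = tf.io.decode_jpeg(tf.io.read_file(photo_path), channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)[tf.newaxis, ...]
    result = detector(image)
    labels = result["detection_class_entities"].numpy()
    scores = result["detection_scores"].numpy()
    return {
        label.decode("utf-8")
        for label, score in zip(labels, scores)
        if score >= min_score
    }

print(detect_ingredients("fridge.jpg"))
```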

What I learned

Our team learned to fall forward. We ran into many challenges and errors while creating this large project in 36 hours. There were excruciating moments when code would not execute, but our team pushed forward to complete the project before the deadline. Motivation alone was not enough to get the project completed; it was the hard work and dedication our team committed that got the job done.

What's next for RecipeGO

There were many ideas and features we would have liked to add to our project, but due to the time constraint we postponed some of them for the future. We would add a feature that lets the application suggest specific types of recipes (desserts, vegetarian, etc.) suited to users’ dietary needs or restrictions, and another that lets users save up to 15 recipes in the dashboard as favourites.
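
The dietary-filter idea could plausibly sit on top of Spoonacular's complexSearch endpoint, which accepts diet and type parameters; this is only a sketch of how it might look, not code from the project:

```python
import requests

SPOONACULAR_API_KEY = "YOUR_API_KEY"  # placeholder

def find_recipes_for_diet(ingredients, diet=None, meal_type=None, number=5):
    """Possible future feature: recipe search filtered by diet (e.g. 'vegetarian')
    and meal type (e.g. 'dessert') on top of the detected ingredients."""
    params = {
        "includeIngredients": ",".join(ingredients),
        "number": number,
        "apiKey": SPOONACULAR_API_KEY,
    }
    if diet:
        params["diet"] = diet        # e.g. "vegetarian", "gluten free"
    if meal_type:
        params["type"] = meal_type   # e.g. "dessert", "main course"
    response = requests.get(
        "https://api.spoonacular.com/recipes/complexSearch",
        params=params,
        timeout=10,
    )
    response.raise_for_status()
    return [recipe["title"] for recipe in response.json()["results"]]
```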
