Inspiration

Many of us, including our peers, struggle to decide what to cook. We usually have a fridge full of items but are not sure what exactly to make with them. This leads us to eating out or buying even more groceries just to follow a recipe.

  • Use what we already have
  • Reduce our food waste
  • Get new, easy meal ideas

What it does

The user first takes a picture of the items in their fridge. They can then upload the image to our application. Using computer vision technology, we detect exactly which items are present in the picture (their fridge). After obtaining a list of the ingredients the user has in their fridge, this data is passed along and matched against a database of 1,000 quick and easy recipes.

How we built it

  • We designed the mobile and desktop website using Figma
  • The website was developed using JavaScript and Node.js
  • We use the Google Cloud Vision API to detect items in the picture
  • This list of items is then matched against a database of recipes
  • The best-matching recipes are returned to the user
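The detection step above can be sketched as a small post-processing function. In the real client library, label annotations come from `ImageAnnotatorClient.labelDetection`; the `KNOWN_INGREDIENTS` list, the `extractIngredients` helper, and the confidence cutoff below are our own illustrative assumptions, not the production configuration:

```javascript
// The Vision API returns generic label annotations (e.g. "Food", "Tomato",
// "Tableware"), so we keep only confident labels that appear in a known
// ingredient list. KNOWN_INGREDIENTS and minScore are illustrative values.
const KNOWN_INGREDIENTS = new Set(['tomato', 'egg', 'milk', 'cheese', 'carrot']);

function extractIngredients(labelAnnotations, minScore = 0.7) {
  return labelAnnotations
    .filter((label) => label.score >= minScore)
    .map((label) => label.description.toLowerCase())
    .filter((name) => KNOWN_INGREDIENTS.has(name));
}

// With the real client, the annotations would come from something like:
//   const [result] = await client.labelDetection(imagePath);
//   const labels = result.labelAnnotations;
const labels = [
  { description: 'Tomato', score: 0.93 },
  { description: 'Food', score: 0.98 },   // too generic, not in the list
  { description: 'Egg', score: 0.65 },    // below the confidence cutoff
];
console.log(extractIngredients(labels)); // → [ 'tomato' ]
```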

Challenges we ran into

We ran into a lot of difficulties and challenges while building this web app, most of which we were able to overcome by helping each other and learning on the fly.

The first challenge we ran into was building and training a machine learning model to apply multi-class object detection to the images the user inputs. This is tricky, as there is no proper dataset containing images of vegetables, fruits, meats, condiments, and other items all together. After various experiments building our own machine learning models from scratch, we tried several pre-existing models and tools for our use case. We found that the Google Cloud Vision API did the best job of all the options available, so we adopted it for our current prototype.

The second challenge was getting the correct recipes from the data returned by the artificial intelligence. We use a database of 1,000 recipes and set a threshold for the minimum number of matches between the ingredients the user has and the ingredients a recipe requires. Our assumption is that the user already has basic staples such as salt, pepper, butter, and oil.
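The threshold matching described above might look something like the following sketch, assuming ingredient names are already normalized strings. The `STAPLES` set, the `matchRecipes` helper, and the default threshold of 3 are illustrative assumptions, not our exact implementation:

```javascript
// Staples we assume every user has; these never count toward the match.
const STAPLES = new Set(['salt', 'pepper', 'butter', 'oil']);

// Return recipe names whose required (non-staple) ingredients overlap the
// user's detected ingredients at least `minMatches` times, best match first.
function matchRecipes(userIngredients, recipes, minMatches = 3) {
  const have = new Set(userIngredients.map((i) => i.toLowerCase()));
  return recipes
    .map((recipe) => {
      const needed = recipe.ingredients.filter((i) => !STAPLES.has(i));
      const matches = needed.filter((i) => have.has(i)).length;
      return { recipe, matches };
    })
    .filter((entry) => entry.matches >= minMatches)
    .sort((a, b) => b.matches - a.matches)
    .map((entry) => entry.recipe.name);
}

const recipes = [
  { name: 'Omelette', ingredients: ['egg', 'cheese', 'milk', 'salt'] },
  { name: 'Salad', ingredients: ['lettuce', 'tomato', 'cucumber', 'oil'] },
];
console.log(matchRecipes(['egg', 'cheese', 'milk', 'tomato'], recipes));
// → [ 'Omelette' ]  (3 matches; Salad only matches 'tomato')
```

Keeping staples out of the count is what lets a three-ingredient omelette match even though the recipe formally lists salt as a fourth ingredient.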

Accomplishments that we're proud of

  • Coming up with an idea that solves a problem every member of our team and many peers we interviewed face
  • Using modern artificial intelligence to solve a major part of our problem (detecting ingredients/groceries) from a given image
  • Designing a good-looking, user-friendly UI with an excellent user experience (quick and easy)

What we learned

Each team member learned a new skill or enhanced an existing one during this hackathon, which is what we were here for. We learned to use newer tools, such as Google Cloud and Figma, to streamline our product development.

What's next for Xcellent Recipes

We truly believe in our product and its usefulness for customers. We will continue working on Xcellent Recipes with a product launch in the future. The next steps include:

  1. Establish a backend server
  2. Create or obtain our own training data for a ML model tailored to our use case
  3. Fine-tune the recipe matching
  4. Launch the company
