At the start of the 2021 school year, we moved away from home and away from the comforts of living with our parents. We were thrown into the adult world, where we had to manage personal finances and cook for ourselves without any prior experience. As we budgeted our meals and bought food in bulk, one problem kept coming up: wasted food. Whether food spoiled or we simply didn't know what to do with a particular item, we saw a lot of it thrown into the garbage. And this wasn't just a problem in our household; we saw it in many of our friends' houses too. Curious, we looked up food-waste statistics and were shocked by the numbers.

The National Zero Waste Council's research on household food waste in Canada has revealed that almost 2.3 million tonnes of edible food are wasted each year, costing more than $21 billion.

That works out to roughly $1,300 per year for the average Canadian family.

On top of its economic implications, food waste also has environmental ones. Those 2.3 million tonnes of avoidable food waste are equivalent to about 6.9 million tonnes of CO2, roughly the emissions of more than 2 million cars on the road.

We decided we needed to combat this, so we came up with Chef Buddy, an app that helps you reduce food waste.

What it does

Chef Buddy is a food/tech app with an emphasis on zero food waste.

Chef Buddy works by having the user take a picture and upload it to our web app. The app encodes the image and runs it through a machine-learning image-recognition pipeline to detect the foods and ingredients in the picture. It then searches through hundreds of thousands of recipes that can be made with those ingredients and returns the top six recommendations to the user.

How we built it

Chef Buddy uses Firebase, Google Vision, the Spoonacular API, OpenCV, NumPy, Pandas, Python, HTML, CSS, and JavaScript.

On the backend, we use the Firebase Realtime Database to store each uploaded image as a base64-encoded string. A Python service retrieves that string, decodes it back into an image, and sends it to Google Vision, whose image-recognition model returns the foods and ingredients it finds in the picture. JavaScript then takes Google Vision's results and queries the Spoonacular API, which searches through hundreds of thousands of recipes containing those ingredients. Finally, we render the top six recommended recipes on the frontend with HTML and CSS.
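The Python side of this pipeline can be sketched roughly as below. The database path, API key, and helper names are placeholders, not our production values; the Spoonacular endpoint shape follows its public findByIngredients REST API.

```python
import base64
import json
from urllib import request as urlrequest


def decode_image(b64_string: str) -> bytes:
    """Turn the base64 string stored in Firebase back into raw image bytes."""
    return base64.b64decode(b64_string)


def top_recipes(ingredients, api_key, number=6):
    """Ask Spoonacular's findByIngredients endpoint for recipe suggestions.

    `api_key` is a placeholder credential; `number=6` matches the six
    recommendations we show on the frontend.
    """
    url = (
        "https://api.spoonacular.com/recipes/findByIngredients"
        f"?ingredients={','.join(ingredients)}&number={number}&apiKey={api_key}"
    )
    with urlrequest.urlopen(url) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Hypothetical end-to-end flow (helper names are illustrative):
    # raw = decode_image(fetch_from_firebase("images/latest"))
    # labels = detect_ingredients(raw)            # Google Vision call
    # recipes = top_recipes(labels, "API_KEY")    # up to 6 recipe matches
    pass
```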

Challenges we ran into

Throughout development we ran into many problems, but we were able to pivot and find effective solutions.

The first problem was transferring data between JavaScript and Python. The libraries we initially wanted to use for this were insecure and slow at runtime. We pivoted and set up a shared database that both sides could write to and read from. We built this on Firebase, Google's cloud-hosted database, which solved the problem and let us transfer data efficiently.
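Using the database as a bridge works because the Firebase Realtime Database exposes every node over REST: appending `.json` to a path lets any client read or write it as JSON. A minimal sketch of the Python side, assuming a hypothetical database URL:

```python
import json
from urllib import request as urlrequest

DB_URL = "https://chef-buddy-demo.firebaseio.com"  # hypothetical database URL


def node_url(db_url: str, path: str) -> str:
    """Firebase Realtime Database REST convention: append `.json` to a node path."""
    return f"{db_url.rstrip('/')}/{path.strip('/')}.json"


def put_value(path: str, value) -> None:
    """Write a JSON value at `path` (mirrors what the JavaScript side does)."""
    req = urlrequest.Request(
        node_url(DB_URL, path),
        data=json.dumps(value).encode(),
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    urlrequest.urlopen(req)


def get_value(path: str):
    """Read back the JSON value at `path` on the Python side."""
    with urlrequest.urlopen(node_url(DB_URL, path)) as resp:
        return json.load(resp)
```

In practice the JavaScript frontend writes the base64 image string to a node, and the Python service reads the same node back.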

The second challenge was food recognition. We initially tried OpenCV's object-detection and image-recognition models, but many of the datasets available on Kaggle could not meet our needs, and after many hours of training, our model still could not reliably identify the foods and ingredients in our images. Google Vision turned out to be the answer: with its model we were able to detect the foods and ingredients in our uploaded pictures, and it returned highly accurate predictions quickly.
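Vision's label detection returns label annotations with a description and a confidence score, which we still have to clean up before a recipe search. A sketch of that post-processing step, where the 0.7 cutoff is an illustrative assumption rather than a tuned value:

```python
def filter_ingredients(labels, min_score=0.7):
    """Keep confident labels, lowercase them, and drop duplicates.

    `labels` is a list of (description, score) pairs in the shape of
    Vision's label annotations.
    """
    seen, out = set(), []
    for description, score in labels:
        name = description.lower()
        if score >= min_score and name not in seen:
            seen.add(name)
            out.append(name)
    return out


# The Vision call itself (requires google-cloud-vision and credentials):
# from google.cloud import vision
# client = vision.ImageAnnotatorClient()
# response = client.label_detection(image=vision.Image(content=image_bytes))
# labels = [(l.description, l.score) for l in response.label_annotations]
```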

Accomplishments that we're proud of

Chef Buddy is a project that we are extremely proud of.

This was our first time working with a database, and by learning to read documentation we picked up the fundamentals of Firebase in a matter of hours.

Another accomplishment we are proud of is integrating the APIs we used and getting them to work together, specifically connecting Google Vision with the Spoonacular API. It was our first time working with both, and making them cooperate took us a while to solve. Connecting our separate parts into the final product was a relieving, exciting moment for all of us, and seeing our code work together without errors is our biggest accomplishment.

What we learned

  • User and client interaction through the Firebase Realtime Database.
  • How to use the backend to create and read database entries.
  • How to make API calls, using the Google Vision API and Spoonacular to find recipes.
  • How to connect the frontend to the backend with a JavaScript framework.

What's next for Chef Buddy

  • Further improve the project with features that add efficiency and convenience.
  • Add live scanning, which could detect multiple items from a short video.
  • Add filters that let the user narrow down which recipes they see.
  • Have Chef Buddy identify which item in the photo is spoiling first.
