EasyEats is a personalized recipe app that suggests recipes based on the ingredients you have in your refrigerator.


Inspiration

Many of us have either watched others go through this or gone through it ourselves: choosing which dish to cook each day is a confusing task. We might have tomatoes, potatoes, and an egg and wonder what we should make for dinner. We decided to solve this problem by creating a platform that lets you simply scan the items in your refrigerator and/or pantry and receive the top recommendations for which recipes to make for your meal!

What it does

EasyEats lets users scan each item they have in the fridge and recommends the top recipes based on the ingredients they have to cook with. It identifies items through an object detection model: simply point your camera at an item and EasyEats recognizes it. It then returns the top recommended recipes that most closely match the full set of scanned ingredients.

This is the object detection interface: simply hold up a vegetable to the camera, and it will detect it!

How we built it

As we moved through the design, build, and implementation phases, we used different software tools. For the user interface, we used Bootstrap for the basic layout of the web application and CSS for styling the display. For the backend, we used Express.js, a web framework for Node.js, to call the Edamam API and retrieve recipes matching the list of ingredients from the scanned images.
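The backend flow can be sketched roughly as follows. This is an illustrative sketch, not our exact production code: the function names, the `/recipes` route, and the environment-variable names are assumptions, and the Edamam query shown uses its basic ingredient search parameters.

```javascript
// Build the Edamam search URL from the scanned ingredient list.
// (Illustrative sketch; parameter handling in the real app may differ.)
function buildEdamamUrl(ingredients, appId, appKey) {
  const params = new URLSearchParams({
    q: ingredients.join(","), // e.g. "tomato,potato,egg"
    app_id: appId,
    app_key: appKey,
  });
  return `https://api.edamam.com/search?${params.toString()}`;
}

// Express wiring (requires `npm install express`; Node 18+ for global fetch):
//
//   const express = require("express");
//   const app = express();
//   app.use(express.json());
//   app.post("/recipes", async (req, res) => {
//     const url = buildEdamamUrl(req.body.ingredients,
//                                process.env.EDAMAM_ID, process.env.EDAMAM_KEY);
//     const data = await (await fetch(url)).json();
//     res.json(data.hits); // Edamam nests matching recipes under "hits"
//   });
//   app.listen(3000);
```

The front end POSTs the list of scanned ingredient names, and the server forwards Edamam's closest-matching recipes back as JSON.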

For object detection, we used ml5.js and p5.js to train a model to recognize different vegetables. As a prototype, we trained the model on a few vegetables, but in the future we can extend it to more grocery items.

Challenges we ran into

  • After a couple of hours, we decided that, due to time constraints, it was infeasible to run the object detection algorithm with TensorFlow on Android, so we prototyped the project as a web application instead.
  • Originally, we found an API we could use to retrieve recipes directly and customize them by cuisine type. However, approval for that API takes 72 hours, so we had to use a different one.
  • Since none of us had any prior experience in object recognition, this was a challenge in its own right! We had to research the object recognition model, get acquainted with it, and train it on specific vegetables.

Accomplishments that we're proud of

  • The original object detection model was not very robust: it could only distinguish a carrot from a non-carrot. We were able to train the model to recognize other vegetables as well.
  • We successfully used new libraries, APIs, and development tools to develop a working application.
  • We’re proud to have a working product of our idea!

What we learned

  • We learned how to design and deploy an end-to-end web application.
  • We learned more about object recognition and how to use different libraries such as ml5.js.

What's next for EasyEats

To improve the program’s ability to detect which items are in the refrigerator, we hope to further train and test the machine learning model on many different household goods. With a better-trained model, we hope to expand the application so that a single scan of the refrigerator recognizes all the items inside.

We also want to bring EasyEats to a mobile application on both Android and iOS so it’s even easier for users to scan their refrigerators with their phones.

We hope to customize EasyEats even further, providing recipe recommendations based not only on the items you have but also on any diets or dietary restrictions users may wish to filter by.
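Since Edamam's search API accepts filter parameters (e.g. "health" labels such as vegetarian or peanut-free), one way to support this would be appending those restrictions to the same ingredient query. A minimal sketch, with illustrative names (we have not built this yet):

```javascript
// Hypothetical extension: append one "health" filter per dietary
// restriction alongside the scanned-ingredient query.
function buildFilteredQuery(ingredients, healthLabels = []) {
  const params = new URLSearchParams({ q: ingredients.join(",") });
  for (const label of healthLabels) {
    params.append("health", label); // repeated key: one entry per restriction
  }
  return params.toString();
}
```

The same query string could then be attached to the existing Edamam request, so filtered recommendations reuse the current recipe-lookup path.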
