Indecision is a problem a lot of people face at meal times, and we wanted to create a way to help people decide what meals to make at home while also making use of the food they already have lying around. This helps to minimize food waste, while also saving you a trip to the grocery store!

What it does

How does RecipeVision do this? RecipeVision is an easy-to-use but very powerful mobile application. You take a picture of your fridge, pantry, or cabinets (or pick an existing photo), and our app scans it, tells you what it thinks you have, and then gives you recipes using those ingredients. Take a picture, press a button, get recipes. It's that easy.

How we built it

We used Microsoft's Cognitive Services, specifically the Computer Vision API, to analyze the pictures taken and return information that we could then parse and use to find recipes. With the food information returned by Microsoft's algorithms, we can then search a recipe database for recipes that use the ingredients you have.
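As a rough sketch of this pipeline, the tag list in a Computer Vision JSON response could be filtered against a whitelist of known foods. Everything here (the `KNOWN_FOODS` set, the regex-based parsing, the sample response) is an illustrative assumption, not our production code:

```java
import java.util.*;
import java.util.regex.*;

public class TagParser {
    // Hypothetical whitelist of ingredient names; a real app would use a much larger list.
    static final Set<String> KNOWN_FOODS =
            new HashSet<>(Arrays.asList("apple", "banana", "egg", "milk"));

    // Pull tag names out of a Computer Vision-style JSON response and keep only foods.
    static List<String> extractIngredients(String json) {
        List<String> result = new ArrayList<>();
        Matcher m = Pattern.compile("\"name\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        while (m.find()) {
            String name = m.group(1).toLowerCase();
            if (KNOWN_FOODS.contains(name)) {
                result.add(name);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Trimmed-down example of the shape of a tag response.
        String sample = "{\"tags\":[{\"name\":\"apple\",\"confidence\":0.97},"
                + "{\"name\":\"table\",\"confidence\":0.55},"
                + "{\"name\":\"banana\",\"confidence\":0.91}]}";
        System.out.println(extractIngredients(sample)); // [apple, banana]
    }
}
```

A real implementation would use a proper JSON library and the tag confidence scores rather than a regex, but the idea is the same: reduce the vision output to a clean ingredient list before searching for recipes.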

Challenges we ran into

We were challenged by time constraints, so we were unable to implement a database of recipes to pull from based on the food our app detected. However, we did come up with two plans of action to tackle this. One would be to use Spoonacular's Food API, which can search its huge database of recipes based on input ingredients.
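For the Spoonacular route, a first step would be building a request to its ingredient-search endpoint. This is a minimal sketch assuming Spoonacular's `findByIngredients` endpoint and its `ingredients`, `number`, and `apiKey` query parameters; the key is a placeholder:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.*;
import java.util.stream.Collectors;

public class RecipeSearch {
    // Build a Spoonacular findByIngredients request URL from a detected-ingredient list.
    static String buildSearchUrl(List<String> ingredients, String apiKey) {
        String joined = ingredients.stream()
                .map(i -> {
                    try {
                        return URLEncoder.encode(i, "UTF-8"); // handle spaces etc.
                    } catch (UnsupportedEncodingException e) {
                        throw new RuntimeException(e);
                    }
                })
                .collect(Collectors.joining(","));
        return "https://api.spoonacular.com/recipes/findByIngredients"
                + "?ingredients=" + joined
                + "&number=5"
                + "&apiKey=" + apiKey;
    }

    public static void main(String[] args) {
        System.out.println(buildSearchUrl(
                Arrays.asList("apple", "flour", "sugar"), "YOUR_API_KEY"));
    }
}
```

The app would issue a GET to this URL and parse the returned recipe list for display.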

The other option would be to create our own database using SQL on Azure or AWS, giving us more control over input parameters.

Additionally, Microsoft's Computer Vision analysis, while covering an expansive range of categories, is not always the best fit for determining the type of food in a picture. It struggles to identify multiple foods in the same image and is not always completely accurate. However, we were usually able to get a good result for a picture of a single food item.
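For the self-hosted SQL option, recipe matching might rank recipes by how many detected ingredients they use. The schema here (`recipes` and `recipe_ingredients` tables) is entirely hypothetical, sketched only to show the shape of the query:

```java
import java.util.*;

public class RecipeQuery {
    // Build a parameterized query over a hypothetical schema:
    //   recipes(id, title)
    //   recipe_ingredients(recipe_id, ingredient)
    // Recipes using the most on-hand ingredients come first.
    static String buildQuery(List<String> onHand) {
        String placeholders = String.join(", ", Collections.nCopies(onHand.size(), "?"));
        return "SELECT r.title FROM recipes r "
                + "JOIN recipe_ingredients ri ON ri.recipe_id = r.id "
                + "WHERE ri.ingredient IN (" + placeholders + ") "
                + "GROUP BY r.id, r.title "
                + "ORDER BY COUNT(DISTINCT ri.ingredient) DESC";
    }

    public static void main(String[] args) {
        System.out.println(buildQuery(Arrays.asList("egg", "milk", "flour")));
    }
}
```

Using `?` placeholders keeps the query safe to run through JDBC's `PreparedStatement` with the detected ingredient names bound as parameters.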

Accomplishments that we're proud of

We are very proud that we were able to successfully feed our smartphone images to Microsoft's Cognitive services and display the resulting analysis to the user. This service has a lot of powerful potential uses, and we believe our app is very useful and practical.

What we learned

We learned a ton about Microsoft's Cognitive Services and how to use them from Java. Along the way we gained a lot of experience working with third-party APIs and reading their sample code thoroughly. We all expanded our knowledge of Android Studio and worked on the UI as well. It was also amazing to see how many different APIs are available for public use.

What's next for RecipeVision

We've got a lot in store. In the future, it would be great to run our application on a stationary Kinect-based camera mounted in the fridge or pantry, which would enable real-time analysis of food stock. Another cool idea would be integration with HoloLens, where users could simply look inside their pantry and be shown recipes in augmented reality based on what they are seeing.
