Inspiration

The motivation stemmed from one of our team members' frustration with cooking the same dish every day and having no easy way to discover new recipes. By simply snapping a picture of your ingredients, the app retrieves a list of potential recipes you can draw inspiration from or use to learn a new dish. Another use case is reducing food waste: you can make the most of any leftover ingredients from your last meal. With more and more people going out to eat, the app lets them see what they can cook with the ingredients they have at home instead of resorting to ordering food at a restaurant. It also removes the extremely time-consuming process of identifying each ingredient yourself and searching for it online to find a recipe.

What it does

A user takes a picture of each ingredient they have. The app encodes the image and sends it to our server, which calls the Azure Computer Vision AI to analyze it. Once the image is analyzed, the result is searched against our database for matching or similar ingredients. The matching ingredients, confidence scores, and a caption of the image are returned to the front-end (your phone) and displayed in the AR environment. Once all the ingredients are “scanned”, the user can send the list of ingredients back to our API, which finds all recipes that use any of those ingredients. Each recipe in the list has a name, an image, and a list of instructions for making it. The list is displayed in the AR environment, where the user can interact with it and make a selection.
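
As a rough illustration of that pipeline, the sketch below shows what the server-side step might look like: forward the uploaded image bytes to the Azure Computer Vision analyze endpoint and match the returned tags against ingredient names in our database. The helper names (`analyze_image`, `match_ingredients`, `known_ingredients`) and the placeholder credentials are ours for illustration, not the exact production code.

```python
# Hedged sketch of the server-side flow: forward the phone's image to Azure
# Computer Vision, then match the returned tags against known ingredient names.
import requests

AZURE_ENDPOINT = "https://<region>.api.cognitive.microsoft.com"  # placeholder
AZURE_KEY = "<subscription-key>"                                  # placeholder


def analyze_image(image_bytes):
    """Ask Azure Computer Vision for tags and a caption describing the image."""
    response = requests.post(
        f"{AZURE_ENDPOINT}/vision/v2.0/analyze",
        params={"visualFeatures": "Tags,Description"},
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    response.raise_for_status()
    return response.json()


def match_ingredients(analysis, known_ingredients):
    """Return (ingredient, confidence) pairs for tags found in our database,
    plus the best caption Azure produced for the image."""
    matches = [
        (tag["name"], tag["confidence"])
        for tag in analysis.get("tags", [])
        if tag["name"].lower() in known_ingredients
    ]
    captions = analysis.get("description", {}).get("captions", [])
    caption = captions[0]["text"] if captions else ""
    return matches, caption
```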

How we built it

We created an API back-end using Django and GraphQL. We have a database that stores the ingredients and recipes, which we query through GraphQL. In addition, we use Microsoft Azure Computer Vision to analyze the images and return a JSON response describing what is in each image. We deployed this API on Microsoft Azure App Service to host our back-end server. On the front-end, we created an iOS application using Swift on macOS. When the app detects a touch action, it captures a snapshot and sends it through our API to the Computer Vision service for image analysis. If an ingredient is recognized, it is added to the set of recognized ingredients and used to search for recipes containing those ingredients. The caption, ingredient name, and confidence are rendered in the AR environment.
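
To make the data model concrete, here is a minimal sketch of the kind of Django models and graphene-django schema described above. Field and query names (`recipes_by_ingredients`, `image_url`, etc.) are illustrative assumptions, not a verbatim copy of our schema.

```python
# Hedged sketch: ingredients and recipes as Django models, exposed via GraphQL.
# Assumes this lives inside a configured Django app using graphene-django.
import graphene
from django.db import models
from graphene_django import DjangoObjectType


class Ingredient(models.Model):
    name = models.CharField(max_length=100, unique=True)


class Recipe(models.Model):
    name = models.CharField(max_length=200)
    image_url = models.URLField(blank=True)
    instructions = models.TextField()
    ingredients = models.ManyToManyField(Ingredient, related_name="recipes")


class IngredientType(DjangoObjectType):
    class Meta:
        model = Ingredient
        fields = "__all__"


class RecipeType(DjangoObjectType):
    class Meta:
        model = Recipe
        fields = "__all__"


class Query(graphene.ObjectType):
    # Recipes that use any of the given ingredient names.
    recipes_by_ingredients = graphene.List(
        RecipeType, names=graphene.List(graphene.String, required=True)
    )

    def resolve_recipes_by_ingredients(self, info, names):
        return Recipe.objects.filter(ingredients__name__in=names).distinct()


schema = graphene.Schema(query=Query)
```

With a schema like this, the iOS client would issue a query along the lines of `{ recipesByIngredients(names: ["tomato", "egg"]) { name imageUrl instructions } }` once the scanned ingredient list is complete.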

Challenges we ran into

One of the biggest roadblocks we ran into was setting up the back-end API on Microsoft Azure, but it was quickly resolved thanks to the on-site Microsoft mentors. In addition, it was difficult to come up with an algorithm and design structure to retrieve recipes based on the recognized ingredients (see the sketch after this paragraph). We also had trouble finding an existing, viable data set of recipes and ingredients.
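
One way to structure that retrieval is to rank recipes by ingredient coverage, preferring recipes that use as many of the recognized ingredients as possible. This is a hedged sketch of the idea rather than our exact algorithm, reusing the `Recipe` model assumed earlier; the function name `rank_recipes` is ours for illustration.

```python
# Rank recipes by how many of the recognized ingredients they actually use.
from django.db.models import Count, Q


def rank_recipes(recognized_names):
    return (
        Recipe.objects
        .annotate(
            matched=Count("ingredients", filter=Q(ingredients__name__in=recognized_names))
        )
        .filter(matched__gt=0)   # keep recipes that use at least one recognized ingredient
        .order_by("-matched")    # best-covered recipes first
    )
```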

Accomplishments that we're proud of

We were able to integrate with the Azure environment without any prior experience. We also tackled a common problem and encourage people to save money by creating an opportunity to cook at home.

What we learned

Drawing up a plan at the beginning decreased development downtime. Azure offers a variety of services that we could employ in future projects.

What's next for ARuHungry

Introduce preferences for individual users, so the recipes returned from recognized ingredients are filtered by each user's preferences. A future expansion could be integrating with grocery stores that want to advertise their products, suggesting great deals to users based on the ingredients they already have.
