Inspiration

Our quarantine hobbies included cooking and baking, and we wanted a faster way to find healthy recipes than typing individual ingredients into a search engine.

What it does

The website identifies the fruit or vegetable in a picture the user takes and finds recipes online that use it as the main ingredient (e.g., apples → apple pie). The website also shows 3D models of the finished dishes so users can interact with them and see what the food looks like before baking or cooking it.

How we built it

We used the Google Cloud Vision API to identify the fruit or vegetable in the image. To find healthy recipes, we queried a search engine for recipes that use the identified food as the main ingredient (e.g., apples → apple pie). For the 3D component, we downloaded models from Poly and uploaded them to echoAR so users can view each dish in 3D.
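A minimal sketch of the label-detection step, assuming default Cloud Vision credentials are configured and a local image file; the file name and helper name are illustrative, not our exact code:

```python
# Sketch: label a photo with the Google Cloud Vision API.
# Assumes GOOGLE_APPLICATION_CREDENTIALS is set and the
# google-cloud-vision package is installed; "photo.jpg" is illustrative.
from google.cloud import vision

def detect_fruit_or_vegetable(path="photo.jpg"):
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Return the most confident label, e.g. "Apple" or "Banana".
    labels = sorted(response.label_annotations,
                    key=lambda label: label.score, reverse=True)
    return labels[0].description if labels else None

if __name__ == "__main__":
    print(detect_fruit_or_vegetable())
```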

Challenges we ran into

The initial goal was for the Google Cloud Vision API to label the objects (food) in the image, but the hackathon didn't sponsor Google Cloud, so we looked for substitutes (listed below; a small classifier sketch follows the list):

- AWS
- An alternate Google Cloud Vision API tutorial
- Classification models (Keras, CNN, Deep Forest)
- Binary classification in Jupyter (e.g., orange vs. apple)
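As an example of the binary-classification route we explored, here is a minimal Keras sketch that separates two classes such as orange vs. apple; the directory layout, image size, and epoch count are assumptions, not our actual training setup:

```python
# Sketch of a binary image classifier (e.g. orange vs. apple) in Keras.
# Assumes images are sorted into data/train/apple and data/train/orange;
# the folder names, image size, and epochs are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

train_ds = keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)

model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = one class, 0 = the other
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```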

We switched several times between building an app and building a website, and we weren't able to connect all of our separate pieces (the website, the AI models, and the echoAR 3D models) together.

Accomplishments that we're proud of

We were able to work with tools we had never used before (echoAR, Google Cloud Vision) and integrate them into our website.

What we learned

We learned more about echoAR and the Google Cloud Vision API, and picked up new concepts in HTML and Python.

What's next for FruitRecipes

We plan to use Flask and Python to host the site on a Google App Engine instance. We also want to experiment more with the Google Cloud Vision API so we can embed it directly into our website.
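A minimal sketch of what that Flask app could look like; the /identify route, the upload field name, and the call into the Vision helper sketched above are assumptions about how we would wire it up, not a finished design:

```python
# Sketch of a Flask app that could be hosted on Google App Engine.
# The /identify route and the detect_fruit_or_vegetable helper (from the
# Cloud Vision sketch above) are illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/identify", methods=["POST"])
def identify():
    # Expect an uploaded photo, label it, and return the top label.
    uploaded = request.files["photo"]
    uploaded.save("/tmp/photo.jpg")
    label = detect_fruit_or_vegetable("/tmp/photo.jpg")
    return jsonify({"label": label})

if __name__ == "__main__":
    app.run(debug=True)
```

On App Engine, this would run behind a WSGI server declared in an app.yaml, with the recipe search and echoAR model links returned alongside the label.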
