Inspiration

The "SeeFood" app from HBO's Silicon Valley

What it does

SeeFood lets people take a picture of the food in their fridge and get back recipe ideas based on what it sees.

How we built it

Conceptual architecture (not fully implemented):

  • React for the frontend; serverless functions deployed on Vercel / Google Cloud for the backend.
  • Image queries are fed into the GCP Cloud Vision API, which detects objects in the photo and returns preliminary labels; the data is then passed to a Tinyverse Rune edge ML model for further labelling.
  • The refined labels are handed off to the spoonacular API, which returns a comprehensive list of matching recipes (a rough sketch of this pipeline follows below).
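
A rough sketch of what the recipe-lookup serverless function could look like on Vercel, assuming a Node/TypeScript runtime, the official @google-cloud/vision client, and spoonacular's findByIngredients endpoint; the file name api/recipes.ts, the request shape, and the confidence threshold are illustrative assumptions, and the Rune edge-model step is left as a stub:

    // api/recipes.ts -- hypothetical Vercel serverless function (illustrative sketch)
    import type { VercelRequest, VercelResponse } from '@vercel/node';
    import vision from '@google-cloud/vision';

    const visionClient = new vision.ImageAnnotatorClient();

    export default async function handler(req: VercelRequest, res: VercelResponse) {
      // Assume the frontend POSTs a base64-encoded fridge photo as JSON.
      const { imageBase64 } = req.body as { imageBase64: string };

      // Step 1: preliminary labels from Cloud Vision ("tomato", "cheddar", ...).
      const [result] = await visionClient.labelDetection({
        image: { content: Buffer.from(imageBase64, 'base64') },
      });
      const labels = (result.labelAnnotations ?? [])
        .filter((l) => (l.score ?? 0) > 0.7) // keep reasonably confident labels only
        .map((l) => l.description)
        .filter((d): d is string => Boolean(d));

      // Step 2 (stub): refine labels with the Rune edge ML model before querying recipes.
      const ingredients = labels;

      // Step 3: ask spoonacular for recipes that use these ingredients.
      const url =
        'https://api.spoonacular.com/recipes/findByIngredients' +
        `?ingredients=${encodeURIComponent(ingredients.join(','))}` +
        `&apiKey=${process.env.SPOONACULAR_API_KEY}`;
      const recipes = await fetch(url).then((r) => r.json());

      res.status(200).json({ ingredients, recipes });
    }

The React frontend would then simply POST the captured photo to this endpoint and render the returned recipe list.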

Challenges we ran into

Accomplishments that we're proud of

What we learned

What's next for SeeFood

Built With

  • react
  • vercel
  • google-cloud-vision
  • rune
  • spoonacular

