Inspiration
HBO's Silicon Valley
What it does
Users take a picture of food in their fridge, and the app returns recipe ideas based on what it sees.
How we built it
Conceptual architecture (not fully implemented):
- Frontend: React.
- Backend: serverless functions deployed on Vercel / Google Cloud.
- Pipeline: image queries are fed into the GCP Cloud Vision API, which detects objects in the image and produces preliminary labels, before the data is passed to a tinyverse rune edge ML model for further labelling. The resulting labels are then sent to the Spoonacular API, which returns a list of matching recipes.
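The pipeline above could be sketched as three stages chained together. This is a minimal illustrative sketch, not the actual hackathon code: the Cloud Vision and Spoonacular calls are stubbed out with hardcoded data so it runs offline (the real versions would use `google.cloud.vision` and Spoonacular's recipe-search-by-ingredients endpoint), and the edge-ML refinement step is approximated by a vocabulary filter.

```python
# Sketch of the photo -> labels -> refined labels -> recipes pipeline.
# All external services are stubbed; payloads are illustrative only.

def detect_labels(image_bytes: bytes) -> list[str]:
    """Stand-in for GCP Cloud Vision label detection."""
    return ["tomato", "egg", "cheese", "shelf"]  # fake preliminary labels

def refine_labels(labels: list[str]) -> list[str]:
    """Stand-in for the edge ML refinement: keep only food-like labels."""
    food_vocab = {"tomato", "egg", "cheese", "milk", "bread"}
    return [label for label in labels if label in food_vocab]

def find_recipes(ingredients: list[str]) -> list[str]:
    """Stand-in for Spoonacular's search-by-ingredients lookup."""
    catalog = {
        ("cheese", "egg", "tomato"): ["Shakshuka", "Frittata"],
    }
    return catalog.get(tuple(sorted(ingredients)), [])

def recipes_from_photo(image_bytes: bytes) -> list[str]:
    labels = detect_labels(image_bytes)       # Cloud Vision stage
    ingredients = refine_labels(labels)       # edge ML stage
    return find_recipes(ingredients)          # Spoonacular stage

print(recipes_from_photo(b"fake-jpeg-bytes"))  # → ['Shakshuka', 'Frittata']
```

Keeping each stage behind its own function means the stubs can be swapped for real API clients without touching the pipeline itself.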