We're college students who are constantly hungry after a long day of coding, but on a college budget we often have to make do with whatever ingredients are already in the pantry. It's easy to get stuck in a cooking routine. We're here to break you out of that with OpenCFood!
What it does
First, take a picture of all the ingredients you have and are willing to use. Send that picture to our app, and our program will identify your ingredients using object recognition, then look up recipes that use what you have on hand. Then, Amazon Alexa will step you through the recipe you'd like to try. It's that simple!
How we built it
The image processing is implemented with the OpenCV Python SDK. Our front-end is built with Google Polymer. We also use Amazon Alexa and the Spoonacular API to look up recipes.
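To illustrate the recipe-lookup step, here is a minimal sketch in Python using Spoonacular's findByIngredients endpoint. The function names, the ingredient list, and the API key placeholder are our own illustrative assumptions; the real app wires the detected ingredients from the OpenCV step into a call like this.

```python
import json
import urllib.parse
import urllib.request

SPOONACULAR_URL = "https://api.spoonacular.com/recipes/findByIngredients"

def build_recipe_query(ingredients, api_key, number=5):
    """Build the Spoonacular findByIngredients request URL from a
    list of ingredient names (e.g. the output of object recognition)."""
    params = {
        "ingredients": ",".join(ingredients),  # comma-separated ingredient list
        "number": number,                      # how many candidate recipes to return
        "apiKey": api_key,
    }
    return SPOONACULAR_URL + "?" + urllib.parse.urlencode(params)

def lookup_recipes(ingredients, api_key):
    """Fetch candidate recipes for the given ingredients (makes a network call)."""
    with urllib.request.urlopen(build_recipe_query(ingredients, api_key)) as resp:
        return json.load(resp)

# Hypothetical usage: the ingredient names would come from the detection step.
url = build_recipe_query(["eggs", "tomato", "rice"], api_key="YOUR_KEY")
```

In a pipeline like ours, the returned recipe titles and steps would then be handed off to the Alexa skill to read aloud one step at a time.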
Challenges we ran into
We had a much bigger vision for this app, but we soon realized that streaming live video and AR were not realistic for our time frame. We pivoted several times and found that this iteration of our idea was the most interesting, so we ran with it.
Accomplishments that we're proud of
Implementing the OpenCV object recognition. That was hard.
What we learned
We learned to be ambitious at the start but to know when to take a step back and prioritize quality over quantity. We also picked up many new technologies, including Google Polymer.