Inspiration
When we first moved out, we discovered that our cooking knowledge was severely lacking. Most of the time we didn't know what we had in the fridge, and when we did, we had no idea what to make. We ended up resorting to takeaway, leaving the once-fresh fruits and veggies in our fridges to rot in silence.
NOT ANYMORE! With VeggieVision, we can boost our food knowledge and minimise food waste.
What it does
Using the camera, you can take a photo of the mystery vegetable in your fridge and our AI backend will tell you what that vegetable is. In addition, if you want to try out unsupported vegetables or you don't want to take a photo, you can use the convenient search bar.
Our system will tell you what that vegetable is, and how you can go from boring vegetables to fun and filling dishes, perfect for cooks at any level.
How we built it
For the front-end, we started by creating a design system in Figma: first low-fidelity mockups to explore different UI options, followed by a final mid-fidelity design chosen through thorough deliberation among group members. Using Tailwind CSS and Next.js, we then built a functioning front-end designed with mobile devices in mind. We used Inkscape to create the logo, and Vercel to deploy our front-end for free.
For the back-end, we started by training our model using TensorFlow on a large vegetable image dataset sourced from Kaggle. Evaluating the model gave us an astonishing 99.9% accuracy, and we then wrote a Flask server (with ngrok tunnelling) to facilitate communication between our Next.js front-end and the model. The Flask server runs on a VM hosted on Google Cloud.
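Our exact architecture isn't reproduced here, but the following is a minimal Keras sketch of the kind of image classifier we trained. The layer stack, 224×224 input size, and class count are illustrative assumptions, not our precise model:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_classes: int = 15) -> keras.Model:
    """Small CNN classifier for RGB vegetable photos (illustrative sketch)."""
    model = keras.Sequential([
        keras.Input(shape=(224, 224, 3)),
        layers.Rescaling(1.0 / 255),              # scale pixel values into [0, 1]
        layers.Conv2D(32, 3, activation="relu"),  # learn low-level edges/colours
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),  # learn higher-level textures
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # one score per vegetable
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Training would then load the Kaggle images from disk, e.g. (paths hypothetical):
# train_ds = keras.utils.image_dataset_from_directory("data/train", image_size=(224, 224))
# build_model().fit(train_ds, epochs=10)
```

The trained model is saved once and loaded at server start, so the Flask endpoint only runs inference.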
Challenges we ran into
On our first day, we had to spend considerable time and resources training and evaluating our model. In addition, our front-end design went through non-stop iteration to create the most visually appealing and consistent experience for our users.
Throughout the final day, we faced considerable challenges with CORS and communication between the front-end and the back-end, but after thorough investigation, we were able to resolve the problem.
Accomplishments that we're proud of
Being able to produce a working prototype in 48 hours is an accomplishment we are proud of. In addition, we are very proud of:
- being able to train an image recognition model in such a short timeframe without sacrificing accuracy
- being able to design and develop a fully-functioning frontend within a short timeframe
- being able to speed up and optimise our code's performance to smooth out the user experience.
What we learned
We learned how to work with TensorFlow, ngrok and Google Cloud. We also took this opportunity to reacquaint ourselves with Next.js and its various recent changes, and we learned how to successfully connect a front-end to a back-end even though they were hosted on different machines and served from different URLs.
What's next for VeggieVision
We plan to expand the number of plants supported by VeggieVision, and we also plan to implement detection of stale/fresh vegetables -- after all, who wants to throw away perfectly good vegetables, and who wants to accidentally eat bad vegetables?
We also plan to support recognition of multiple vegetables at a time, along with further real-world uses, such as attaching VeggieVision to a food storage unit like a fridge or freezer. This would expand on the design of the refrigerator, turning it into a convenient decision maker in the meal planning process.
Built With
- figma
- flask
- google-cloud
- inkscape
- kaggle
- keras
- nextjs
- ngrok
- premiere-pro
- tailwindcss
- tensorflow
- vercel