As sophomores learning to live independently, we often struggle to find time for buying groceries, choosing recipes, and cooking. All too often, we're unsure what to make with a limited set of ingredients on a limited budget, and much of our food ultimately goes to waste. We wanted to solve this problem sustainably. What if we had a product that limited our need to buy groceries, chose recipes for us, and left only the cooking for us to do? We wanted GreenBook to let us spend less energy shopping and more of it on making good-quality food, enabling consumers to live healthier, more sustainable lives.
What it does
GreenBook pairs a physical device with a virtual platform; the two work together seamlessly to make the experience as convenient as possible for the user. The use case is as follows:
- A user comes back from a trip to the grocery store. Before placing items on their respective shelves, they place each item, one by one, on the GreenBook sensor.
- The device, detecting the presence of an item, scans it and recognizes what it is. It displays the item on the screen along with various estimated figures: calories, expiry date (entered manually), and so on. The user has the option to alter any of the data if needed.
- Later, the user can choose to add more items, manage their kitchen inventory, or find recipes through the screen. The kitchen inventory keeps them aware of items they are running low on or that will expire soon.
- If the user chooses to find recipes, the screen presents a list of healthy recipes based on the ingredients available in the user's pantry.
- The user can pick one of the recipes to read more and get to cooking!
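As a rough sketch of that last step, recipe selection can be as simple as set containment between a recipe's ingredient list and the pantry, with soon-to-expire items prioritized. The function and data below are illustrative, not our actual implementation:

```python
def matching_recipes(pantry, recipes):
    """Return recipes whose ingredients are all in the pantry,
    listing those that use soon-to-expire items first.

    `pantry` maps ingredient name -> days until expiry."""
    available = set(pantry)
    cookable = [r for r in recipes if set(r["ingredients"]) <= available]
    # Prioritize recipes that use up the ingredients expiring soonest.
    return sorted(cookable, key=lambda r: min(pantry[i] for i in r["ingredients"]))

pantry = {"eggs": 5, "spinach": 2, "rice": 90, "tomato": 3}
recipes = [
    {"name": "Spinach omelette", "ingredients": ["eggs", "spinach"]},
    {"name": "Tomato rice", "ingredients": ["rice", "tomato"]},
    {"name": "Pancakes", "ingredients": ["flour", "eggs"]},  # flour missing
]
for r in matching_recipes(pantry, recipes):
    print(r["name"])  # Spinach omelette, then Tomato rice
```

Pancakes are filtered out because flour isn't in the pantry, and the spinach omelette sorts first because spinach expires in two days.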
How we built it
Hardware - The product is built from scrap acrylic, extrusion, and wood (and a lot of hot glue). We use a Raspberry Pi with a Pi Cam, along with an Arduino Uno and an ultrasonic sensor. The Pi and camera module detect objects. The Arduino and ultrasonic sensor guide the user to place the item correctly, giving feedback with a lit LED, and signal the Pi to take an image.
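The trigger logic on the sensor side is simple enough to sketch here in Python (the threshold and sample count are illustrative, not our firmware's actual values): an item only counts as "placed" after several consecutive close-range readings, which filters out a hand passing over the sensor.

```python
PLACEMENT_THRESHOLD_CM = 10   # hypothetical: readings closer than this suggest an item
STABLE_SAMPLES = 3            # hypothetical: consecutive close readings required

def detect_placement(readings_cm, threshold=PLACEMENT_THRESHOLD_CM, stable=STABLE_SAMPLES):
    """Return the index at which an item counts as placed, or None.

    An item is 'placed' after `stable` consecutive readings under `threshold`;
    at that point the Arduino would light the LED and signal the Pi."""
    run = 0
    for i, distance in enumerate(readings_cm):
        run = run + 1 if distance < threshold else 0
        if run >= stable:
            return i
    return None

# A hand sweeping past (isolated close readings) does not trigger;
# an item resting on the sensor does.
print(detect_placement([40, 8, 35, 9, 7, 6, 6]))  # -> 5
```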
Backend Software - The Pi hosts a REST API server through which local GreenBook clients can request pictures to be taken and identified. Identification happens on-board with OpenCV, which recognizes the ArUco markers on our food items. Given the short duration of the hackathon, we used these markers to simplify object detection, but our full vision includes a general food-recognition system. The backend also uploads images to the image host Cloudinary, which gives us a publicly accessible link to each image. To make the REST API accessible over a public port, we reverse-proxied our server.
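A minimal sketch of such an endpoint, using only Python's standard library (our actual server's framework, routes, and payload fields may differ, and the camera, ArUco, and Cloudinary calls are stubbed out here):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

def capture_and_identify():
    """Stub: on the real device this would trigger the Pi Cam, run the
    OpenCV ArUco detector, and upload the frame to Cloudinary. The
    values below are placeholders."""
    return {"item": "oat milk", "marker_id": 7,
            "image_url": "https://example.com/greenbook/oat-milk.jpg"}

class GreenBookHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/capture":
            body = json.dumps(capture_and_identify()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), GreenBookHandler)  # port 0 = any free port
Thread(target=server.serve_forever, daemon=True).start()
print(f"serving on port {server.server_address[1]}")
```

A client on the local network would then hit `GET /capture` and receive the identified item as JSON.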
Frontend Software - The website is developed with Tailwind for the interactive component of our submission. Using our backend API, we dynamically display data from the Pi on the website. To develop the idea fully, we also built an extensive mockup in Figma that works alongside the website to show the full potential of our product.
Challenges we ran into
Connectivity - when hardware is involved, one problem always follows: how do you get its data out and share it? While we had shell access to the Pi through serial UART, after a lot of headbanging we had to ask the HackHarvard staff to register our Raspberry Pi on their network (love you guys). This was needed to install packages and to let the Pi offer a full backend web service. The connectivity issues followed us further: we also had to disable CORS restrictions so that our front end and back end could communicate.
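"Disabling" CORS in practice meant having the backend send permissive CORS headers on every response. A hedged sketch of what that looks like (the `*` origin is fine for a hackathon demo but should be locked down in a real deployment):

```python
# Standard CORS response headers that let a browser front end on another
# origin call the API. Values here are illustrative.
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type",
}

def with_cors(headers):
    """Merge the permissive CORS headers into a response's header dict."""
    merged = dict(headers)
    merged.update(CORS_HEADERS)
    return merged

print(with_cors({"Content-Type": "application/json"}))
```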
New Technologies - many of the technologies used in this project were completely new to at least some of us. None of us had ever used Figma, and most of the team hadn't used React before.
Accomplishments that we're proud of
We're really proud of our software-hardware integration. Though often arduous to set up, this project would not have been complete without the ultrasonic sensor, Arduino, Raspberry Pi, and the rest of the hardware that brings the product into the physical world. These components do a great job of translating (not-so-)elegant code into an elegant interface ^^ We managed to develop a product that connects React to Figma, Raspberry Pi to Arduino, and software to hardware all at once!
What we learned
This product pushed us to train ourselves in various new domains: from new frameworks like React to unfamiliar ground such as running OpenCV on the Pi. We also learned how to use components that differ in programming and nature (OpenCV, Cloudinary, React, Tailwind, Figma, the Arduino, and the Raspberry Pi) in tandem to build one product. Finally, integrating the Figma design with React was a challenge, but we came up with an innovative way of presenting an aesthetic UI alongside a functional React webpage that sources data from the Pi.
What's next for GreenBook
There are so many directions GreenBook can expand in! The most obvious improvements are to the software. Due to the time limit imposed by the competition, we ended up using ArUco markers to differentiate grocery items. While this conveyed our vision well, it isn't very practical, and in the future it would be great to see OpenCV combined with a model trained on grocery items. Other elements, such as recipe retrieval and inventory data management, could also be developed further. Most of all, the hardware could be reduced to a single microcontroller, and the build quality could be improved with better components and parts.