Inspiration

We were inspired by the prompt "intersectional environmentalism" and used a FigJam board to brainstorm, which can be found here: https://www.figma.com/file/plMzp4dvbEz4BJle5VNptU/HOTH-Brainstorm?node-id=0%3A1&t=MzaVpYANYgPTSEe0-1 We started by jotting down random ideas, and eventually converged on a few comprehensive themes, which you can see in yellow. We settled on the idea of connecting people in a way that is both culturally and environmentally sensitive.

What it does

  • Turn the contents of your garden or kitchen into a random recipe from hundreds of cultures!
  • Share the favorite recipes you’ve made from the app’s suggestions.
  • Use curated lists (or computer vision) to discover which foods are better for the environment, where to find ethically sourced food, and where to buy the best produce for your Organic Intelligence recipes!

How we built it

We started with a full design pass in Figma and FigJam. In parallel, half of the team built the frontend in React and the backend in Node.js, aiming for an intuitive, responsive user interface backed by a scalable, reliable server-side architecture. We also implemented a Python cloud function on GCP that uses OpenCV to preprocess images and the Cloud Vision API to extract ingredient names from them. This approach gave us accurate, efficient ingredient detection while benefiting from the scalability and reliability of GCP.
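The cloud function's post-processing step can be sketched roughly as follows. This is a simplified illustration, not the production code: the Cloud Vision call itself is omitted, `labels` stands in for the label-annotation descriptions the API returns, and `KNOWN_INGREDIENTS` is a hypothetical lookup table.

```python
# Illustrative lookup table; the real app would use a much larger dataset.
KNOWN_INGREDIENTS = {
    "tomato", "onion", "garlic", "basil", "egg", "rice", "carrot",
}

def extract_ingredients(labels):
    """Normalize Vision API labels and keep recognized ingredients, deduped."""
    found = []
    for label in labels:
        name = label.strip().lower()
        # Naive singularization so "tomatoes" still matches "tomato".
        if name.endswith("es") and name[:-2] in KNOWN_INGREDIENTS:
            name = name[:-2]
        elif name.endswith("s") and name[:-1] in KNOWN_INGREDIENTS:
            name = name[:-1]
        if name in KNOWN_INGREDIENTS and name not in found:
            found.append(name)
    return found
```

In practice, labels like "Countertop" or "Refrigerator" come back alongside food labels, so a filtering step of this kind is what turns raw annotations into a usable ingredient list.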

Challenges we ran into

We encountered some challenges that tested our technical and problem-solving skills. These included:

  • Integrating various technologies: Our app relied on a range of different technologies, including React, Node.js, OpenCV, and the Cloud Vision API. Integrating these technologies required careful planning and coordination, particularly when it came to debugging and troubleshooting.

  • Handling edge cases: Ingredient detection and recognition can be a complex and nuanced process, particularly when it comes to unusual or exotic ingredients. We had to account for edge cases and ensure that our app could handle a wide range of ingredients and ingredient combinations.

  • Managing time constraints: This was a relatively short hackathon, only 12 hours long, which meant that we had to balance our technical work with project management and communication. We had to prioritize tasks and stay organized, while also ensuring that we were meeting our deadlines and creating a functional app.
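One common edge case of the kind described above is the same ingredient appearing under different regional names. A minimal sketch of an alias-mapping approach (the table here is illustrative, not the app's real data):

```python
# Map regional or unusual names to a canonical ingredient
# before recipe matching. Illustrative entries only.
ALIASES = {
    "aubergine": "eggplant",
    "coriander": "cilantro",
    "courgette": "zucchini",
    "scallion": "green onion",
}

def canonicalize(name):
    """Return the canonical ingredient name for a detected label."""
    key = name.strip().lower()
    return ALIASES.get(key, key)
```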

Accomplishments that we're proud of

As a team, we achieved several accomplishments that we are proud of during the hackathon. These include:

  • Building a functional app: We successfully built an app that could accurately detect and recognize ingredients in a user's fridge or countertop and suggest recipes based on those ingredients (based on test images from Google). Our app was user-friendly, performant, and scalable.

  • Leveraging cutting-edge technologies: We leveraged a range of modern web technologies, including React, Node.js, and OpenCV, to create an innovative and impactful solution. We also utilized the Cloud Vision API and the Google Cloud Platform to improve our app's accuracy and efficiency.

  • Developing teamwork and communication skills: We collaborated effectively as a team and valued each other's opinions.

What we learned

Participating in HOTH provided us with valuable technical and non-technical skills. We gained more practical experience in building an app using modern web technologies, including React for the frontend and Node.js for the backend. We also leveraged the power of cloud computing through the Google Cloud Platform (GCP), creating a Python cloud function that utilized OpenCV and the Cloud Vision API for accurate and efficient ingredient detection and recognition.

Through our exploration of computer vision technology, we gained a deeper understanding of its real-world applications, benefits, and limitations. We tested and optimized various algorithms and techniques, learning how to adapt them to different use cases.

Our teamwork skills were also honed during the hackathon. We collaborated effectively, utilizing simple but powerful communication tools.

What's next for Organic Intelligence

Some potential next steps include:

  • Improving ingredient detection accuracy: While our app performed well during the hackathon, there is always room for improvement when it comes to ingredient detection accuracy. We could explore additional computer vision techniques or refine our existing algorithms to achieve greater accuracy.

  • Expanding recipe options: Currently, our app suggests recipes based on the ingredients that a user has available. However, we could expand our recipe database and include options for users with dietary restrictions or preferences, such as vegetarian, vegan, or gluten-free recipes.

  • Adding social features: We could integrate social features into our app, such as allowing users to share their ingredient lists or favorite recipes with friends and family. This could help to build a sense of community around the app and increase user engagement.

  • Developing a mobile app: While our app is currently web-based, we could develop a mobile app to make it more accessible to users who prefer to use their smartphones or tablets. Alternatively, we could just optimize the current web app better for mobile devices.
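The dietary-preference filtering described above could be sketched as follows. The recipe schema and tag names are assumptions for illustration, not the app's actual data model:

```python
def suggest_recipes(recipes, available, diet=None):
    """Return names of recipes whose ingredients are all available
    and which carry the requested dietary tag, if one is given."""
    matches = []
    for recipe in recipes:
        if diet and diet not in recipe.get("tags", []):
            continue
        if set(recipe["ingredients"]) <= set(available):
            matches.append(recipe["name"])
    return matches

# Hypothetical sample data.
RECIPES = [
    {"name": "Tomato Basil Pasta",
     "ingredients": ["tomato", "basil", "pasta"],
     "tags": ["vegetarian"]},
    {"name": "Garlic Rice",
     "ingredients": ["garlic", "rice"],
     "tags": ["vegan", "vegetarian", "gluten-free"]},
]
```

For example, `suggest_recipes(RECIPES, ["garlic", "rice", "tomato"], diet="vegan")` would suggest only Garlic Rice, since Tomato Basil Pasta is missing ingredients and lacks the vegan tag.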
