Inspiration

As three high school seniors, we love to de-stress by taking walks and exploring nature. Once, while walking a trail together in Sunnybrook Park, we encountered warning signs for poison ivy. This motivated us to make outdoor exploration more accessible for everyone, including those who cannot see such warnings.

What PlantSight does

PlantSight is a product that combines AI with practical functionality to assist individuals with visual impairments by identifying and alerting them to potentially dangerous plants in their surroundings. It is designed as a web app available on iOS and Android, and it comes with a headband that holds the phone in position so the app can work as intended.

How we built PlantSight

In Python, we capture frames from the camera using OpenCV and send each frame to Google Gemini, which determines whether a plant in view is poisonous. A sound file is then played depending on Gemini's response. We use Flask, a micro web framework written in Python, to tie everything together into a user-friendly web app. We also designed a website mockup in Figma to refine the user interface and user experience, helping people better understand our product, PlantSight.
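The loop above can be sketched roughly as follows. This is a minimal illustration, not our exact code: the model name, prompt wording, sound filenames, and helper names are all placeholders, and the actual Gemini call (which needs an API key) is stubbed out with a comment.

```python
PROMPT = "Is the most prominent plant in this image poisonous? Answer YES or NO."


def parse_verdict(reply: str) -> bool:
    """Map Gemini's free-text reply to a poisonous/safe flag (illustrative)."""
    return reply.strip().upper().startswith("YES")


def alert_sound(poisonous: bool) -> str:
    """Pick which sound file to play for the result (placeholder filenames)."""
    return "danger.wav" if poisonous else "safe.wav"


def run(camera_index: int = 0) -> None:
    """Capture frames with OpenCV and classify each one."""
    import cv2  # imported here so the helpers above work without OpenCV installed

    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # In the real app this frame is sent to Google Gemini, e.g.:
        #   reply = model.generate_content([PROMPT, image]).text
        reply = "NO"  # placeholder where the Gemini response would go
        print(alert_sound(parse_verdict(reply)))
    cap.release()
```

The key design point is that Gemini returns free text, so the app has to normalize the reply into a binary decision before choosing which alert sound to play.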

Challenges we ran into

  • We originally used the YOLOv8 algorithm, but the dataset was too large to integrate into Google Codelabs, training took too long, and the error margin was too high; we switched to a pre-trained Google Gemini model and combined it with a Flask web application
  • Our current model is still too big to run directly on a mobile phone; this is a work in progress
  • A lot of time was lost trying different approaches and scrapping APIs :(

What we learned

  • Brainstorm ideas and learn new technologies ahead of time
  • Anticipate that issues will constantly arise
  • Choose smaller, simpler ideas when coding time is short

Built With

  • Python
  • OpenCV
  • Google Gemini
  • Flask
  • Figma
