From the Amazon rainforest to the cornfields of Iowa, few places on Earth have been spared the spread of invasive species. But how can someone with an untrained eye tell the difference between the green leaves of two plants? What if we created a trained eye, one that goes where you go, whose skills can be called up on demand to report whether a plant is friend or foe? Thus, InvasiveID was born.

What it does

Using TensorFlow, we trained a model to identify plants and classify them as either invasive or native (non-invasive). A user feeds an image of a plant into the app, and the model identifies the plant and reports whether it poses a danger to the local ecosystem. Once an invasive species has been identified, the user can report the plant's location so its spread can be tracked and the plant can be destroyed.

How we built it

We began by focusing on Iowa's flora to gain a better understanding of the most common locally invasive species. We built an automated system for compiling images of invasive and native plant species from the Center for Invasive Species and Ecosystem Health and the USDA Natural Resources Conservation Service, respectively. We then assigned labels and cleaned the data in preparation for classification.
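The cleaning step can be sketched roughly like this. This is a minimal illustration, not our exact pipeline: the folder layout (one directory per species) and the helper name are assumptions, and it simply drops any file Pillow cannot verify as an image.

```python
from pathlib import Path

from PIL import Image  # Pillow, for cheap image integrity checks


def clean_and_label(root: Path) -> list[tuple[Path, str]]:
    """Walk root/<species>/ folders, keep only files that Pillow can
    verify as images, and return (path, label) pairs where the label
    is taken from the parent folder name."""
    samples = []
    for img_path in sorted(root.glob("*/*")):
        try:
            with Image.open(img_path) as im:
                im.verify()  # integrity check without a full decode
        except Exception:
            continue  # drop corrupt downloads and non-image files
        samples.append((img_path, img_path.parent.name))
    return samples
```

Labeling by folder name keeps the scraper simple: each source page's species name becomes a directory, and the label assignment falls out of the file layout.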

After compiling the dataset, we trained a TensorFlow convolutional neural network in Google Colab. Our dataset included five invasive plants common to Iowa plus some non-invasive plants to balance the classes. We trained the model to around 75% test accuracy (the point at which overfitting set in) and exported it in the TensorFlowJS Layers format. We then built a mobile app with React Native and loaded the model via the TensorFlowJS library. This lets the app run inference without a network connection, a good fit for use outdoors where spotty cell coverage could be an issue.
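Our exact architecture and hyperparameters aren't reproduced here, but a minimal Keras sketch of this kind of pipeline looks like the following. The class count, image size, and layer widths are illustrative assumptions, not the values we shipped; the converter command at the end is the standard route from Keras to the TensorFlowJS Layers format.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6   # hypothetical: 5 invasive species + one pooled "native" class
IMG_SIZE = 128    # illustrative input resolution

model = models.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
    layers.Rescaling(1.0 / 255),                 # normalize pixel values
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),                         # regularization against overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# After model.fit(...) in Colab, save and convert for the app
# (requires the tensorflowjs pip package):
#   model.save("model.h5")
#   tensorflowjs_converter --input_format keras model.h5 web_model/
```

The converted `web_model/` directory (a `model.json` plus weight shards) is what the TensorFlowJS library loads on the device.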

Challenges we ran into

  • Although some of our team members had experience with React Native for frontend development, none of us had designed and implemented an entire app before. This was a fun opportunity to learn more about frontend development and pick up new skills.

  • We weren't able to find readily available datasets with large numbers of images of invasive species, so we had to compile a dataset ourselves. Kudos to Joseph for his work automating and curating our dataset.

  • Plant images are genuinely hard to differentiate: many species share similar features and differ only in small, nuanced ways. Although our model achieved decent performance, making it more accurate would likely take further research and data collection.

Accomplishments that we're proud of

  • Making a decent-looking app on our own for the first time :)

  • Video production

What we learned

  • Plants can be hard to differentiate just by image (other features might be helpful in the future)

  • Designing interfaces and visualizations for mobile requires different considerations than designing for the web

What's next for InvasiveID

We wanted to start small with our project, focusing on Iowa where we all live. Our next step is to broaden our horizons to the world, so that no matter where you go, InvasiveID can quickly and accurately detect ecological threats right where you are standing.
