Inspiration

Problems are solved when people get involved. That's why we decided to use gamification in our project: it keeps people engaged and motivated to help solve real-world issues, much like our two sources of inspiration, Pokémon GO and FoldIt. Pokémon GO motivates people to get outside and participate in an activity, and FoldIt makes contributing to a real cause fun. Combining these two ideas is how we landed on Tumbleweed GO.

What it does

Tumbleweed GO crowdsources tumbleweed containment to the general public.

With our user-friendly mobile app, users can upload photos of tumbleweeds in their neighborhood to alert authorities of rogue tumbleweeds. Each tumbleweed's location is recorded in our database immediately. Of course, we need to validate our users' images, so we use image recognition to filter out pictures that do not contain tumbleweeds.
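As a rough illustration, the server-side filter can be a simple check over the label annotations Cloud Vision returns for a photo (the helper name and the confidence threshold below are our own choices, not part of any API):

```javascript
// Sketch of a validation gate over Cloud Vision label annotations.
// Each annotation has a `description` and a confidence `score` in [0, 1].
const MIN_SCORE = 0.7; // assumed confidence threshold (our choice)

function containsTumbleweed(labelAnnotations) {
  return labelAnnotations.some(
    (label) =>
      label.description.toLowerCase().includes('tumbleweed') &&
      label.score >= MIN_SCORE
  );
}

// Example annotations, shaped like a labelDetection response:
containsTumbleweed([{ description: 'Tumbleweed', score: 0.82 }]); // → true
containsTumbleweed([{ description: 'Cat', score: 0.99 }]);        // → false
```

Reports that fail the check are rejected before anything is written to the database.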

Our web app displays the data from our database, and using wind speeds at each tumbleweed's location from the U.S. National Weather Service API, we approximate where each tumbleweed will be a few days in the future. We display these predictions visually in our web app, giving users both the information and the ability to take action.
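The prediction step can be sketched as a naive drift model. The drift factor, and treating the bearing as the direction the wind blows toward, are our simplifying assumptions, not NWS outputs:

```javascript
// Naive drift model: a tumbleweed moves downwind at some fraction of the
// wind speed. DRIFT_FACTOR is an assumed tuning constant, not NWS data.
const DRIFT_FACTOR = 0.5;          // assumed: tumbleweed moves at half wind speed
const METERS_PER_DEG_LAT = 111320; // approximate meters per degree of latitude

// NWS forecasts report wind speed as strings like "10 mph".
function parseWindSpeedMps(windSpeed) {
  return parseFloat(windSpeed) * 0.44704; // mph -> m/s
}

// Predict a tumbleweed's position after `hours`, given the wind speed and
// the bearing (degrees clockwise from north) the wind blows TOWARD.
function predictPosition(lat, lon, windSpeed, bearingDeg, hours) {
  const dist = parseWindSpeedMps(windSpeed) * DRIFT_FACTOR * hours * 3600;
  const rad = (bearingDeg * Math.PI) / 180;
  const dLat = (dist * Math.cos(rad)) / METERS_PER_DEG_LAT;
  const dLon =
    (dist * Math.sin(rad)) /
    (METERS_PER_DEG_LAT * Math.cos((lat * Math.PI) / 180));
  return { lat: lat + dLat, lon: lon + dLon };
}

// Example: a 10 mph wind blowing due north for 24 hours moves a
// tumbleweed roughly 1.7 degrees of latitude.
const tomorrow = predictPosition(40.0, -100.0, '10 mph', 0, 24);
```

Running this step once per forecast day gives the "few days out" positions the map displays.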

How we built it

We've attached a diagram to this post showing how the components of our project interact with each other.

We built our mobile app using Flutter.

Our web app is built using React. Our UI library is Material UI, and we’re using OpenLayers as our world map library.

Our middleware is built with Node.js. Specifically, we're using an Express.js server and a Firebase database. For image recognition, we use Google Cloud Platform's machine learning libraries. We also use the U.S. National Weather Service API to predict tumbleweed movement.

Challenges we faced

We had issues working with Google Cloud Platform, but throughout the day (and night) we learned how to use this tool properly for our project.

Another challenge we faced is that our machine learning library cannot recognize tumbleweeds directly. We're working around this problem by looking for related objects (e.g. plants) and considering other image qualities (e.g. color).
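A sketch of that workaround is below. The list of related labels, the color rule, and all thresholds are our guesses at reasonable values, and the dominant color is assumed to come from a Vision image-properties request:

```javascript
// Workaround heuristic: accept a photo if Vision tags a plant-like label
// AND the photo's dominant color falls in a dry straw/brown range.
// Labels and thresholds below are our assumptions, not a Vision API spec.
const RELATED_LABELS = ['plant', 'twig', 'grass family', 'shrub'];

function isDryBrown({ red, green, blue }) {
  // Straw/tan colors: red channel strongest, blue weakest, not too dark.
  return red > 100 && red >= green && green > blue;
}

function probablyTumbleweed(labelAnnotations, dominantColor) {
  const hasRelatedLabel = labelAnnotations.some((label) =>
    RELATED_LABELS.includes(label.description.toLowerCase())
  );
  return hasRelatedLabel && isDryBrown(dominantColor);
}

// Example: a plant-labeled photo whose dominant color is tan passes.
probablyTumbleweed(
  [{ description: 'Plant', score: 0.95 }],
  { red: 190, green: 160, blue: 90 }
); // → true
```

It's a blunt heuristic, but it filters out most obviously unrelated photos without a custom-trained model.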

Accomplishments we’re proud of

This Hackathon was the first time any of us ever used image recognition, so we’re proud that we were able to learn to use something new and apply it to our project within the given time.

What we learned

We learned the following technologies during this Hackathon:

  • Flutter
  • Google Cloud Platform (Google Vision AI)
  • OpenLayers

What's next?

Moving forward, we hope to continue adding features to make our tool even more insightful and user-friendly.

Right now, our product cannot detect obstacles that tumbleweeds can run into. We might be able to include topographic (elevation) data and other landscape features (e.g. lakes) in our tumbleweed prediction algorithm.
