Inspiration

Every year, tens of thousands of wildfires in the United States destroy homes, displace communities, and leave acres of land in ruin. Efficient, accurate detection could catch these fires before they grow to uncontrollable levels. Today, wildfire detection still relies heavily on human observation to confirm or track a fire, which makes it inefficient, low in coverage, and too slow to enable a timely response. We wanted to use visual recognition machine learning to detect fires quickly and send an alert to our custom-built web application for first responders. This continuous, smart surveillance would be cheaper and provide broader coverage than traditional alternatives.

What it does

A visual recognition model running on the Google Coral classifies the images it reads from the device camera. When the device detects a wildfire, it writes an alert to a database; a web application reads from that database and notifies users of the fire. These devices could be stationed in place of watch towers for 24/7 detection and deployed more densely to cover more acreage.
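To make that flow concrete, here is a minimal sketch of the device-side loop. It is illustrative only: the model file, the "fire" label index, the frame path, and the alert endpoint are hypothetical placeholders rather than our exact setup, and it assumes a quantized TensorFlow Lite classifier compiled for the Edge TPU.

```python
# Minimal sketch of the detect-and-alert loop (illustrative placeholders throughout).
import time

import numpy as np
import requests                       # stand-in here for the real database connection
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL_PATH = "wildfire_classifier_edgetpu.tflite"   # hypothetical model file
ALERT_URL = "https://example.invalid/api/alerts"    # hypothetical alert endpoint
FIRE_CLASS = 1                                      # assumed index of the "fire" label
THRESHOLD = 0.8                                     # confidence required before alerting

interpreter = Interpreter(
    model_path=MODEL_PATH,
    experimental_delegates=[load_delegate("libedgetpu.so.1")],  # run on the Edge TPU
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, _ = inp["shape"]

def fire_confidence(frame_path: str) -> float:
    """Return the model's confidence that the frame contains fire."""
    img = Image.open(frame_path).convert("RGB").resize((width, height))
    interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(img, dtype=np.uint8), 0))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    if scores.dtype == np.uint8:       # approximate dequantization for a quantized output
        scores = scores / 255.0
    return float(scores[FIRE_CLASS])

while True:
    confidence = fire_confidence("latest_frame.jpg")   # frame from the camera (capture code omitted)
    if confidence >= THRESHOLD:
        # Record the detection so the web application can alert responders.
        requests.post(ALERT_URL, json={"confidence": confidence, "timestamp": time.time()})
    time.sleep(5)                                       # re-check every few seconds
```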

How we built it

The base of our technology is the Google Coral, a single-board computer with a machine learning accelerator that can classify images on-device. We built our classification model with TensorFlow. The Coral is outfitted with a camera to take in real-time visual data and sends an alert to our Azure database when it detects a fire. The Coral is linked to the database using Node-RED and Watson IoT, which establish the data connection, format the Coral's output, and forward it to the database. From there, the information feeds a .NET MVC web application whose UI alerts users to the fire and its location, facilitating a response. Because the hardware is inexpensive, the same approach could be generalized to detect floods, tsunamis, or tornadoes by training on different datasets and stationing the devices on towers, buoys, buildings, etc.
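The device-to-cloud hop can also be sketched briefly. The snippet below is an assumption-laden illustration, not our exact code: it uses paho-mqtt (1.x API) and Watson IoT Platform's documented MQTT conventions (client ID d:ORG:TYPE:DEVICE, token authentication, event topics), with the organization ID, device type, device ID, and token as placeholders. The Node-RED flow that reshapes the event and inserts it into the Azure database is not shown.

```python
# Sketch of publishing a detection event from the Coral to Watson IoT over MQTT.
# Assumes paho-mqtt 1.x; all identifiers and the token below are placeholders.
import json
import time

import paho.mqtt.client as mqtt

ORG = "myorg6"                    # hypothetical Watson IoT organization ID
DEVICE_TYPE = "coral-firewatch"   # hypothetical device type
DEVICE_ID = "station-01"          # hypothetical device ID
AUTH_TOKEN = "REPLACE_ME"         # device auth token (never hard-code in production)

client = mqtt.Client(client_id=f"d:{ORG}:{DEVICE_TYPE}:{DEVICE_ID}")
client.username_pw_set("use-token-auth", AUTH_TOKEN)
client.tls_set()                  # encrypt the connection
client.connect(f"{ORG}.messaging.internetofthings.ibmcloud.com", 8883)
client.loop_start()

def send_fire_alert(confidence: float, station: str) -> None:
    """Publish a 'fire' event; the Node-RED flow subscribes to it,
    formats the payload, and writes a row to the Azure database."""
    event = {"station": station, "confidence": confidence, "ts": time.time()}
    client.publish("iot-2/evt/fire/fmt/json", json.dumps(event), qos=1)

send_fire_alert(0.93, DEVICE_ID)
```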

Challenges we ran into

Integrating the many different technologies was a recurring difficulty throughout the project; we all had to learn new tools and cycle through several software options before settling on this stack.

Accomplishments that we're proud of

We are proud to have a working web application, and we are glad we learned so much during this hackathon.

What we learned

We learned that it may be best to consolidate platforms and streamline our technology as much as possible, e.g., building the web application, database, and training model solely on IBM Cloud.

What's next for FireWatch

We want to continue to improve our detection model and application, add geolocation services, and generalize the technology to detect floods, tsunamis, or tornadoes by training on different datasets.
