Inspiration

I regularly come across news reports about wildfires in various parts of the United States, including the recent Maui and Canadian wildfires, and it's truly alarming. What's particularly shocking is that a staggering 85% of all wildfires in America are human-related. It's tragic to witness the devastation caused by these relentless fires, with people losing not only their homes but also their cars, properties, and even their loved ones. The consequences go beyond the immediate damage: wildfires contribute to climate change and air pollution, cause health problems, burn homes and infrastructure to ashes, and decimate precious wildlife ecosystems.

Upon learning how extinguishing a fire within the "Golden Time" could potentially prevent millions of dollars' worth of damage and, more importantly, save lives, I knew that I needed to do something.


What it does

CNN Model: Predicts whether an area is at risk of a wildfire. By gathering satellite images from around the world, wildfire-susceptible regions can be identified and narrowed down to later be tracked and monitored.

YoloV8 Model: Detects wildfires by tracking smoke. Trained on a dataset specialized in smoke only, since detecting actual flames from a distance is often impractical for catching premature wildfires. This model is intended for use in real-time surveillance cameras that watch over forests and wildlife.


Major Update:

Official UIs and new functionalities/tools for BOTH MODELS have been created!!!

Goal #3: Real-World Applications has been successfully completed.


Satellite Image CNN Model UI:

1. Model successfully deployed on HuggingFace: Upload a photo and you're done! Users are now able to upload images and receive predictions from the Wildfire Prediction CNN model within seconds! Deployed on HuggingFace using a Gradio interface.

https://huggingface.co/spaces/Neural01/Wildfire_Prediction

2. Created Satellite Image Tool on Google Earth Engine: How will users use the CNN model without satellite images? No problem! Users are now able to survey ANY REGION AROUND THE WORLD with a click of a button using the Google Earth Engine satellite tool. The image tool also displays the NDVI and EVI vegetation index values of the designated area of interest. Users can download the graph in CSV, SVG, or PNG format.

https://satellite-images-neural.projects.earthengine.app/view/interactive-satellite-image-tool

[How to use] :

  1. Find a region you would like to survey
  2. Click on the "Rectangle" button
  3. Drag your rectangle around your desired region
  4. Screenshot results (Windows: [Shift + Windows + S] or "Print Screen" button, Mac: [Shift + Command + 4])

NOTE: For more accurate results and to prevent incomplete-image errors, please avoid making the survey regions too large.

[IMAGE DATA SOURCES]
Satellite Images: LANDSAT 9 Tier 1 TOA
Vegetation Index Table: MOD13A2.006 Terra Vegetation Indices
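For reference, the NDVI and EVI values shown in the vegetation index table are standard ratios of band reflectances. A minimal Python sketch of the formulas (the band values here are made-up reflectances for illustration, not real LANDSAT/MODIS data):

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    # Enhanced Vegetation Index, with the standard MODIS coefficients
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Made-up reflectance values for a well-vegetated pixel
nir, red, blue = 0.45, 0.08, 0.04
print(round(ndvi(nir, red), 3))       # healthy vegetation gives a high NDVI
print(round(evi(nir, red, blue), 3))
```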

Inspired by and built off of the following project: https://code.earthengine.google.com/fcc81cfebae226157ba7dc4e67db4851 (found while searching for solutions on Stack Overflow)


YoloV8 Smoke Detection Model UI:

https://huggingface.co/spaces/Neural01/wildfire-detection-yolo

A new UI for the YoloV8 model has been released! Users are now able to receive REAL-TIME PREDICTIONS from the Smoke Detection Model. The model can now detect smoke from images, videos, webcams, YouTube videos, and RTSP streams! The UI was built using Streamlit and is deployed on HuggingFace.

Code Source: All credit for the UI inspiration goes to https://github.com/CodingMantras/yolov8-streamlit-detection-tracking.


How I built it

**CNN Model:**

  1. Preprocessing the Data: Using the TensorFlow/Keras library, the data was preprocessed with ImageDataGenerator(). Images were resized to (350, 350) and put into batches of 256. Normalization was applied to the images prior to prediction.

Training Data: 30250 images belonging to 2 classes. Validation Data: 6300 images belonging to 2 classes. Test Data: 6300 images belonging to 2 classes.
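The preprocessing step above can be sketched as follows (the augmentation settings are assumptions; the source only specifies rescaling, a (350, 350) target size, and batches of 256 via `flow_from_directory`-style loading):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values to [0, 1]; flips/rotations are assumed augmentations
datagen = ImageDataGenerator(rescale=1.0 / 255, horizontal_flip=True, rotation_range=20)

# The real project would call something like:
#   datagen.flow_from_directory("train/", target_size=(350, 350), batch_size=256)
# Here we demonstrate the normalization on one random "image" instead.
raw = np.random.randint(0, 256, size=(1, 350, 350, 3)).astype("float32")
batch = next(datagen.flow(raw, batch_size=1, shuffle=False))
print(batch.shape)  # (1, 350, 350, 3), values now in [0, 1]
```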

2. Create the model:

Total Model Parameters: 1,822,242

Model Overview

  • Type: Basic Vanilla CNN Model
  • Loss Function: Categorical Crossentropy
  • Optimizer Function: Adam (Learning_rate= 0.0001)
  • Epochs = 5
  • Added Early stopping

For a more detailed model summary, please visit https://www.kaggle.com/ljbcoder/wildfire-prediction-vanilla-cnn
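A minimal sketch of a vanilla CNN matching the settings above (the exact layer sizes here are assumptions; the real 1,822,242-parameter architecture is on the Kaggle page):

```python
from tensorflow.keras import layers, models, optimizers, callbacks

model = models.Sequential([
    layers.Input(shape=(350, 350, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),  # Dropout layers were used to curb overfitting
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),  # 2 classes: wildfire / no wildfire
])

model.compile(
    loss="categorical_crossentropy",
    optimizer=optimizers.Adam(learning_rate=0.0001),
    metrics=["accuracy"],
)

# Early stopping, as in the original training run (patience is an assumption)
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=2,
                                     restore_best_weights=True)
# model.fit(train_gen, validation_data=val_gen, epochs=5, callbacks=[early_stop])
```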

3. Training and Evaluating the model: As expected, the model performed worse on the training data than on the testing data, since rotating and flipping the images made training harder. The training accuracy came out to around 92.5%. However, this augmentation allowed the model to perform well on the testing data, which returned an accuracy of 94%.

4. Deploying the Model: Exported the model to an .h5 file and used Gradio to create a UI for interacting with it.
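The export/reload round trip can be sketched like this (a tiny stand-in model is used here so the snippet is self-contained; the real app wraps the reloaded CNN in a Gradio interface on HuggingFace):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; the real project exports the trained CNN the same way
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(350, 350, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.save("wildfire_cnn.h5")  # legacy HDF5 format, as used in the project

# At serving time (e.g. inside the Gradio app), reload and predict
reloaded = tf.keras.models.load_model("wildfire_cnn.h5")
probs = reloaded.predict(np.random.rand(1, 350, 350, 3), verbose=0)
print(probs.shape)  # (1, 2): probabilities for the two classes
```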


**YoloV8 Model:**

  1. Import Dataset from Roboflow: Found a dataset practical for use in actual surveillance cameras, since the model needs to be able to detect smoke from far away.

  2. Train YoloV8 model on Google Colab: Used Google Colab because my ordinary laptop does not have a GPU suitable for ML/AI. The model was trained in about 2 hours on a T4 cloud GPU (which used up my whole Colab GPU limit ;-;).

  3. Evaluate Model on Validation Data: Although the model was not bad at predicting images, its loss and accuracy could be improved. The mAP50 is a good 0.935; however, the mAP50-95 remained at 0.73, and losses stayed in a mid-to-high range. The model needs improvement in order to predict wildfires more accurately.

  4. Using OpenCV to put the model into action: Using OpenCV, I was able to put the model to the test detecting smoke from images, videos, and even external cameras. Although accuracy could be improved, it still did a relatively good job of detecting smoke.


Challenges I ran into

  1. GPU computing power was a BIG issue. I mainly relied on online IDEs such as Kaggle and Google Colab to train my models, but the slow training speeds made it difficult to experiment with which configuration would give me the highest accuracy. Because of this, models were trained using only a bare minimum of epochs. I ultimately ended up using my whole Google Colab GPU limit during experimentation with the YoloV8 model, so unfortunately the YoloV8 model is not at its peak performance.

  2. Apart from using a traditional CNN, I also tried satellite image predictions by training a ResNet50 model, but it had poor accuracy: 83%. I spent a lot of time debugging and finding corrupted image files, and it was unfortunate that I wasn't able to use the trained model for predictions.

  3. Overfitting: While training my CNN model for the first time, I noticed that it started off with high accuracy but plateaued after only a few epochs. With some research, I found that this may be caused by too many excess neurons within the architecture, which can ultimately lead to strange predictions or overfitting. I used Dropout layers to regulate this issue, improving my accuracy by 5%.

  4. My original plan for creating a UI for the Satellite Image CNN was to host my model on Google Cloud's VertexAI, connect it to Google Earth Engine, and receive direct predictions from the interactive map itself. However, the images generated in Google Earth Engine were in GeoTIFF format, and there was no easy way to translate a GeoTIFF image into the NumPy array the CNN model takes as input. I would also have needed to preprocess the data and translate the outputs from the model into an array or a binary result. This approach was far too complex, inefficient, and costly (calls to the VertexAI API cost a lot of money) for creating a UI. Instead, I opted to create a satellite image tool on Google Earth Engine (free) and hosted my model on HuggingFace Spaces (free) as a practical, cheap alternative. The results were astounding, as it only takes a few seconds to get a prediction for the desired area of interest from the model.


Accomplishments that I'm proud of

Personal Accomplishments: First hackathon. First time training a SOTA model. First time training a CNN model. First time creating a GitHub repository. Tons of experimentation and debugging to make my models run. Applied my theoretical knowledge to solve a major practical problem. First time learning how to deploy ML/AI models.

Project Accomplishments: The CNN model's accuracy is 94%. The YoloV8 model is able to predict wildfire smoke with relative accuracy. The models and code are all successfully deployed and usable in practical, real-life situations.


What I learned

"With effort, even a modest concept can grow into a magnificent solution."

Along the way, I acquired a massive spectrum of new knowledge in AI, ranging from a plethora of image detection and segmentation models to the deeper theory and architecture of Convolutional Neural Networks (CNNs). Throughout the hackathon, I explored innovative solutions and optimizations to enhance model accuracy, honed my critical thinking and problem-solving skills, and even improved my video presentation for the hackathon application. Overall, I'm feeling confident in my ability to quickly learn and translate ideas into real-world applications. I would never have dreamt of accomplishing a project this big.

Also, my research on wildfires gave me new insight into this problem. For example, although wildfires do burn up dead biomass, allowing for fertile soil and new life, fires should be kept under control. Methods such as backburning can be used to burn up excess biomass and regulate the health of a forest, while also preventing uncontrolled wildfires from occurring.


What's next for "Using AI to Predict/Detect Wildfires"

Unfortunately, I did not have enough time to train a YoloV8 model that can classify both smoke and fire at the same time (due to the Google Colab GPU limit and the time constraint). My model is currently focused on smoke detection, as that best fits the current application of surveillance cameras detecting wildfires, but getting a better, improved model into implementation would be a great next step.

Goals

  1. Alert mechanism that sends a message to the user when a wildfire is detected
  2. Night-time smoke/fire detection
  3. Real-World Application [COMPLETE]
  4. Improve Model accuracy
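For Goal #1, a sketch of what composing the alert message could look like using Python's standard library (the sender, recipient, and helper name are all hypothetical; actually sending it would use `smtplib.SMTP`):

```python
from email.message import EmailMessage

def build_alert(location, confidence):
    # Compose a wildfire alert email from a model detection
    msg = EmailMessage()
    msg["Subject"] = f"WILDFIRE ALERT: smoke detected at {location}"
    msg["From"] = "alerts@example.com"    # hypothetical sender
    msg["To"] = "ranger@example.com"      # hypothetical recipient
    msg.set_content(
        f"The smoke detection model reported a detection at {location} "
        f"with confidence {confidence:.0%}. Please verify on the live feed."
    )
    return msg

alert = build_alert("45.52N, 122.68W", 0.91)
print(alert["Subject"])
```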

Issues:

- There seems to be an error with the webcam function when running the app on HuggingFace.

- Image/video predictions can be slow, as this is a large project with a heavy model.

These errors do not occur when running on a local machine. I have uploaded the code for my UI to my GitHub repo, so feel free to download it and check it out yourself!

Current Progress:
[I'm currently in the process of training new YoloV8 models for Night-time smoke/fire detection. New models will be uploaded to the Streamlit App soon!]
