Inspiration

Canada's 2023 wildfire season was statistically its most destructive on record, with roughly 15 million hectares of land burnt. And it isn't just Canada: around the world, forest fires are a rising issue, damaging homes and endangering people in high-risk areas while taking a massive toll on the world's forests. Current methods of wildfire detection are too slow for remote areas. Globally, detection relies largely on civilians spotting and reporting fires, so uninhabited regions lack the monitoring needed to catch fires before they reach a less manageable state. Part of sustainability is reducing, or even stopping, the harm done to the environment so the planet stays livable not only for us but for every other animal on Earth. Project Firefly aims to make forest fire detection in remote areas easy, early, and, most importantly, accessible.

Common forest fire detection methods today include civilian reporting, lookout stations, ground and air patrols, and, more recently, computer-based fire prediction and custom detection sensors placed throughout forests. These are all useful strategies, but in remote areas human detection rarely happens during a fire's early stages, computer prediction is only a stepping stone, and sensor networks can be costly and tedious to set up.

What it does

Project Firefly aims to enable early forest fire detection by combining the common detection methods into one easy platform. It is both a resource for civilians to report forest fires through image and location submissions and an interface for drone detection systems. Together, this data helps authorities understand the location and spread of fires and provides an early warning system for civilians in the path of approaching wildfires.

Firefly drone detection uses basic camera-equipped drones to identify and report fires. This removes the need for constant personnel patrols over large remote areas and lets civilians contribute more easily, and more meaningfully, to fire detection in the areas around them. Firefly works with any camera-equipped drone that can stream over RTMP, which allows it to connect to our custom servers and fire detection models.

Challenges we ran into

Since Project Firefly was built within the limited time frame of Hack The Valley 9, we weren't able to implement every feature we would have liked. We did, however, get working CV models running on a DJI Mini drone's feed, allowing it to detect a fire source as small as a lighter from 20 m away.

What's next for FIREFL.AI

Ideally we want to improve the reliability of fire detection by adding temperature sensors to help verify that a forest fire exists, and a GPS module to output more in-depth data about active fires. We didn't have access to a GPS module, but we did build a proof of concept using a temperature module, a rotary encoder, and an ESP32, letting us feed in custom "GPS" inputs and read and react to temperature data. If the average temperature rises quickly enough over a period of time, the system registers it as a fire and records the "exact coordinates" where it was detected. The idea is to cross-reference this with the camera footage to make fire detection more reliable and the data more useful.
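The temperature-rise check can be sketched as a sliding window over recent readings, flagging a possible fire when the newer half of the window averages significantly hotter than the older half. The window size and threshold below are illustrative values, not calibrated figures from the project:

```python
from collections import deque

class TempRiseDetector:
    """Flag a possible fire when the average temperature rises faster than
    a threshold over a sliding window of sensor readings."""

    def __init__(self, window: int = 10, rise_threshold_c: float = 5.0):
        self.readings = deque(maxlen=window)
        self.rise_threshold_c = rise_threshold_c

    def update(self, temp_c: float) -> bool:
        self.readings.append(temp_c)
        if len(self.readings) < self.readings.maxlen:
            return False  # not enough history yet
        # Compare the average of the older half vs the newer half of the window
        half = len(self.readings) // 2
        values = list(self.readings)
        older = sum(values[:half]) / half
        newer = sum(values[half:]) / (len(values) - half)
        return (newer - older) >= self.rise_threshold_c
```

On the ESP32 the same logic would run in the firmware loop, with the flagged reading tagged with the rotary-encoder "coordinates" before being sent to the backend.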

For the user interface, we want to expand the web app beyond a medium for reporting fires and finding resources. Using the fires currently reported through user submissions and drone detection, the app could estimate approximately how fast, and in which direction, a fire will spread over time. That would let the web app also serve as an alert system, warning users early and giving them more time to prepare for disaster.
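A first-pass version of that spread estimate could be a crude radial model: assume the fire front advances outward at a constant rate and warn anyone inside the projected radius. The rate below is a placeholder, not a calibrated figure; a real model would account for wind, fuel, and terrain:

```python
def estimated_fire_radius_km(hours: float, spread_rate_kmh: float = 1.5) -> float:
    """Crude radial spread estimate: fire front advances at a constant
    rate in all directions. spread_rate_kmh is an illustrative placeholder."""
    return spread_rate_kmh * hours

def in_warning_zone(dist_from_fire_km: float, hours_ahead: float) -> bool:
    """Would a point this far from a reported fire be reached within the
    given time horizon under the crude model above?"""
    return dist_from_fire_km <= estimated_fire_radius_km(hours_ahead)
```

The web app would run this check against each user's location for every active fire report and push an alert when `in_warning_zone` turns true.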

In the end, we want Project Firefly to fulfil its potential of helping sustain the forests of areas at risk for wildfires. Ideally, forest fires would be detected in their early stages, reducing the harm done to the area.

How we built it

The front end of the fire detection system was developed with Next.js, offering a responsive and intuitive user experience. The interface lets users monitor real-time footage from the drone and view relevant data on detected fires.

For the core fire detection functionality, we implemented a custom YOLOv8 model. The model was fine-tuned extensively, with additional training on a specialised dataset, to increase its precision in identifying fires across varied environments, from dense forests to open fields.

On the backend, we used Flask to manage the data flow between the drone and the detection model. The drone continuously captures video footage and sensor data, such as temperature and humidity, which the Flask-based system stores and passes to the YOLOv8 model for real-time analysis. Once a fire is detected, the system flags the footage, records critical information like the time and location of the fire, and sends alerts to the web app. The integration between the frontend and backend ensures that users are promptly notified of any potential fire hazards, while the backend efficiently handles large volumes of data from the drone, keeping the system scalable and robust. This project demonstrates how machine learning and drone technology can combine into an automated fire detection solution aimed at preventing large-scale forest fires.

Accomplishments that we're proud of

Some of our most significant accomplishments in this project include successfully implementing the computer vision system using a YOLOv8 model built with PyTorch and developing a fully functional, user-friendly front-end interface. One of the key technical achievements was fine-tuning the YOLO model to work seamlessly with the drone's hardware, enabling real-time fire detection with high accuracy. Leveraging PyTorch gave us flexibility in training and optimising the model for better performance in diverse environments, such as dense forests or open landscapes. We also integrated the model with the drone's sensors through a robust Flask backend, ensuring that incoming data and video footage from the drone were processed and analysed efficiently. The model's ability to detect fires in real time, paired with sensor data, was crucial to achieving high accuracy. On the front end, built with Next.js, we created a smooth and intuitive interface that lets users monitor live footage, view sensor readings, and receive instant alerts when a fire is detected, giving users an accessible way to interact with the detection system and manage critical fire response measures. Overall, our biggest accomplishment is delivering a solution that supports sustainability: by automating fire detection and enabling early intervention, this system has the potential to prevent large-scale forest fires, protect ecosystems, and contribute to environmental conservation.

What we learned

By using OpenCV with a custom-trained YOLO algorithm for fire detection, we gained valuable experience in real-time video processing and object recognition. We learned how to set up RTMP servers using Nginx to stream video feeds from DJI drones and process them with our fire detection model. This project deepened our understanding of various object recognition algorithms and their performance, particularly how YOLO uses metrics like Intersection over Union (IoU) to score predicted bounding boxes. Additionally, we explored the practical applications of computer vision (CV), grasping how it interprets and processes visual data in real time, including the challenges of accuracy and optimisation in dynamic environments. This experience gave us hands-on knowledge of real-time data handling, model deployment, and the intricacies of fine-tuning computer vision systems for specific tasks such as fire detection.
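The IoU metric mentioned above is simple to state: the area where a predicted box and a ground-truth box overlap, divided by the area of their union. A minimal implementation, with boxes given as `(x1, y1, x2, y2)` corners:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes don't overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Training pipelines typically count a prediction as a match when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice), which is how the metric feeds into the confidence and accuracy figures YOLO reports.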

Built With

next.js, flask, python, pytorch, yolov8, opencv, nginx, esp32
