Inspiration
Provide valuable data to on-site coordinators seconds before firefighters enter a burning building, minimizing the time required to search for and rescue victims. Report conditions around high-risk areas to alert firefighters to what lies ahead in their path. Increase operational awareness through live, autonomous data collection.
What it does
We can control the drone from a remote location, allowing it to take off, fly in patterns, and autonomously navigate an enclosed area to look for dangerous conditions and potential victims using a proprietary face-detection algorithm. The web interface relays a live video stream, location, temperature, and humidity data back to the remote user. The drone saves the locations of detected faces, so coordinators can quickly pinpoint individuals at risk. Firefighters use this information to quickly defuse life-threatening conditions with increased awareness of what is happening inside the affected area.
How we built it
We built a JavaScript and HTML front end, using Solace's PubSub+ broker to relay commands from the web UI to the drone with minimal latency. Our AI stack consists of a Haar cascade that finds AI markers and detects faces using a unique face-detection algorithm through OpenCV. To find fires, we look for the areas with the highest light intensity and heat, which directs the drone to fly near and around the areas of concern. Once a face is found, a picture is taken and telemetry is relayed back to the remote web console. Our Solace PubSub+ broker instance runs on Google Cloud Platform.
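For a concrete sense of the command path, the DJI Tello accepts plain-text SDK commands over UDP on port 8889. The sketch below is illustrative, not our exact production code: the helper names and the flight sequence in the comments are our own, but the command strings and address are from the Tello SDK.

```python
import socket

# The Tello listens for plain-text SDK commands on UDP port 8889;
# 192.168.10.1 is the drone's address on its own Wi-Fi network.
TELLO_ADDR = ("192.168.10.1", 8889)

def make_command(action, arg=None):
    """Build a Tello SDK text command, e.g. 'takeoff' or 'forward 50'."""
    return action if arg is None else f"{action} {arg}"

def send_command(sock, cmd):
    """Fire one command at the drone; replies arrive on the same socket."""
    sock.sendto(cmd.encode("ascii"), TELLO_ADDR)

# Typical sequence (run while connected to the drone's Wi-Fi):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_command(sock, make_command("command"))      # enter SDK mode
#   send_command(sock, make_command("takeoff"))
#   send_command(sock, make_command("forward", 50))  # move 50 cm forward
```

In our setup, the web UI publishes these commands to a PubSub+ topic and a subscriber next to the drone forwards them over UDP, so the browser never talks to the drone directly.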
Challenges we ran into
Setting up the live video stream on the Raspberry Pi 4B proved extremely difficult, as we could not encode the raw .h264 output from the Raspberry Pi's GPU into an .mp4 container on the fly. When the same script ran on Windows, however, the live video stream and all AI functionality worked perfectly. We spent a lot of time debugging the program on the Raspberry Pi to acquire our UDP video live stream, since all ML and AI functionality was inoperative without it. In the end, we somehow got it to work.
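For context, the Tello broadcasts its raw .h264 stream over UDP on port 11111. A minimal sketch of capturing those packets to a file, assuming default ports and a hypothetical helper name; a tool like ffmpeg can then remux the result into .mp4 without re-encoding:

```python
import socket

VIDEO_PORT = 11111  # the Tello's documented video-stream port

def capture_raw_h264(out_path, max_packets=1000, port=VIDEO_PORT):
    """Dump raw .h264 UDP packets to a file; returns bytes written."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    written = 0
    with open(out_path, "wb") as f:
        for _ in range(max_packets):
            data, _addr = sock.recvfrom(2048)  # video packets fit in one datagram
            f.write(data)
            written += len(data)
    sock.close()
    return written
```

The captured elementary stream can then be remuxed with, e.g., `ffmpeg -i stream.h264 -c copy stream.mp4`.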
Brute-forcing every port of the DJI Tello drone to collect serial output took nearly five hours and required spinning up a DigitalOcean instance before we could access the drone's control surfaces and video data.
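The port hunt boils down to probing each port and seeing which ones answer. A simplified TCP version of that loop is sketched below; the Tello's control and video channels are actually UDP, where "probing" means sending a datagram and waiting for a reply, but the shape of the scan is the same. The function name and timeout are illustrative.

```python
import socket

def scan_tcp_ports(host, ports, timeout=0.3):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Running this kind of sweep against the drone (and watching which probes produced traffic) is how we narrowed down the ports carrying control and video data.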
Accomplishments that we're proud of
We were really proud to get autonomous flight working with facial recognition. Brute-forcing every Wi-Fi port on the drone to manipulate it the way we wanted was quite the task, so we were super happy to have all the functionality working by the end of the makeathon.
What we learned
You can't use GPS indoors because it's impossible to get a satellite lock.
What's next for FireFly
Commercialization.
Built With
- arduino
- dji-tello
- drone
- flask
- google-cloud
- javascript
- lte
- opencv
- particle-boron-lte
- python
- raspberry-pi
- scikit-learn
- solace
- telus-lte-cat-m1
- web