Inspiration
Due to climate change and human population growth, increasing numbers of animals are becoming endangered or going extinct. The logistics of tracking these elusive animals mean that the full extent of many species' behaviour and movement is unknown, hampering conservation efforts. Furthermore, tracking can be a very focused operation: identifying and following individuals in a population can take conservationists many days.
It would be ideal to have a quickly deployable system that tracks a wide range of animals in the wild in a fully autonomous fashion. The system would track a range of species rather than a single targeted animal and allow individuals to be followed for many years. Observing wildlife less disruptively than from helicopters or vehicles, with greater manoeuvrability in densely vegetated areas, would allow the identification, real-time tracking and logging of a vast array of endangered species.
What it does and how we built it
We use advanced computer vision techniques to perform real-time object detection. We fine-tune a region-based convolutional neural network pre-trained on the COCO object detection dataset. Our prototype uses the animal subset of the PASCAL VOC 2012 dataset, giving 4,000 training images and 400 test images. On an RTX 3080 GPU the model trains in around 40 minutes and performs real-time inference at 30 frames per second. We apply data augmentation and L2 regularisation to improve performance on the test set. We also built a Python tool, based primarily on OpenCV and PyTorch, that automatically extracts frames from a video feed and runs inference with a trained model.
A separate part of the project used MAVLink and a lightweight Java drone simulator (jMAVSim) to send a mission to the drone and receive commands through thread queues in Python.
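The thread-queue pattern for passing commands from the user terminal to the drone-execution thread can be sketched with Python's standard library (the `DroneWorker` class and command names are hypothetical; the real system forwards commands over MAVLink):

```python
import queue
import threading

class DroneWorker(threading.Thread):
    """Consumes commands from a queue and executes them on the drone."""
    def __init__(self, commands):
        super().__init__(daemon=True)
        self.commands = commands
        self.executed = []

    def run(self):
        while True:
            cmd = self.commands.get()  # blocks until the terminal sends a command
            if cmd == "quit":
                break
            # In the real system this would translate cmd into MAVLink messages.
            self.executed.append(cmd)

def send_commands(cmds):
    q = queue.Queue()
    worker = DroneWorker(q)
    worker.start()
    for cmd in cmds:  # e.g. commands typed at the user terminal
        q.put(cmd)
    q.put("quit")
    worker.join()
    return worker.executed
```

For example, `send_commands(["arm", "takeoff", "land"])` returns `["arm", "takeoff", "land"]`: the queue decouples the terminal thread from the drone-execution thread, so commands can be entered while a mission is in flight.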
Challenges we ran into
We didn't have any drone footage to begin with, so we had to spend time finding suitable datasets and learning how to use them. Training machine learning models on image data is also very time-consuming, so we had to work efficiently.
Controlling a simulated drone required multiple separate programs to interface with each other: the simulator, the visualiser and the control program. Each accepts slightly different commands and connects differently, requiring specific versions of libraries to be installed. Sending live commands to the drone during flight proved difficult, as setting up the environment left very little time to develop the code.
Accomplishments that we're proud of
The drone simulation can successfully execute a loaded mission without fault. We also got the drone control working across threads, allowing messages to be passed from the user terminal to the drone's mission execution.
What we learned
We learnt about edge computing and the MAVLink protocol for automated remote drone control. We also learnt about transfer learning as an efficient way to train ML models from pre-trained weights.
What's next for E.A.S.T
With high-accuracy real-time object detection combined with autonomous drones, we envision achieving fully autonomous tracking.

