Onsite Dynamic Identification Neural Network


We discovered that camera traps used to alert rangers to the presence of poachers were being triggered by animals so often that the ranger base stations were overwhelmed with images to review. As a result, poachers can reach their targets before rangers identify them, and sometimes the images in which poachers are caught are never reviewed at all.

What it does

The ODINN prototype is tailored to efforts against elephant poaching and is loaded directly onto the camera traps. It acts as a form of onsite triage: it identifies humans and elephants and alerts the rangers only to photos that require their attention, freeing up resources for more important work. Given how often the cameras are accidentally triggered, this reduces the number of images that the rangers need to check by 98%. ODINN can be retrofitted to existing camera traps, meaning there is no outlay for additional infrastructure or equipment. When deployed in the field, ODINN works straight away and requires no training time. ODINN also improves over time as it collects more images.

Because ODINN identifies the locations of both humans and elephants, rangers can use their knowledge of tracking, poaching techniques and the local area to plan a coordinated intercept at an appropriate location. This tactical advantage helps protect the rangers as well as getting them to the poachers before the poachers get to the elephants. We believe it is important to combine high-tech, simple-to-use solutions with human intelligence and good old-fashioned police work (you can see an example of how this would work in the map in picture 3).

Unfortunately, where there is money there can also be corruption. ODINN helps reduce the role of corruption in illegal wildlife trafficking by automatically tagging time-stamped images, making the system significantly more robust to human manipulation.

How we built it

We used 10,000 labelled images from camera traps in the Serengeti, Tanzania: 2,000 images of elephants, 3,000 images of humans and 5,000 images that were either empty or contained other animals. We ran these 10,000 images through a neural network to extract key features, then used those features to classify each image as Human, Elephant or Other. If an image is classified as Human or Elephant, an alert is sent to the command center so that the image can be reviewed.
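The alerting step of this pipeline can be sketched in a few lines. This is a minimal illustration under assumptions, not the deployed model: the per-class scores are assumed to come from the neural network's output, and the function names `classify` and `triage` are ours, not part of any existing codebase.

```python
CLASSES = ["Human", "Elephant", "Other"]

def classify(scores):
    """Pick the class whose score from the network is highest."""
    best = max(range(len(CLASSES)), key=lambda i: scores[i])
    return CLASSES[best]

def triage(scores):
    """Onsite triage: alert the command center only for Human or Elephant."""
    label = classify(scores)
    return label, label in ("Human", "Elephant")

# Illustrative scores only (in practice these come from the trained network).
label, alert = triage([0.9, 0.1, 0.0])  # → ("Human", True)
```

Because this decision runs on the camera itself, only the small fraction of alert-worthy images ever needs to be transmitted and reviewed.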

Challenges we ran into

  • We had a lot of data problems to begin with, but we solved these by working closely with the organisers, explaining the issues and potential solutions.
  • Creating the hardware prototype was very challenging in the time available. However, over the course of the hackathon we created a working model camera trap using the same Arduino boards used in the field. The model cost about £30 ($40), weighed less than a kilo and was easily hidden. In practice the algorithm is loaded directly onto the Arduino that makes up the camera hardware, allowing onsite triage.

Accomplishments that we're proud of

  • Despite being a prototype, ODINN correctly flags 78% of elephants and 93% of humans
  • As a result, ODINN reduces the number of images that rangers need to look at by 98%
  • The team made use of the diverse skill sets within the group to create a revolutionary product in a short time frame
  • ODINN can be deployed on camera traps already in the field and requires no investment in new equipment.
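The headline figures above reduce to simple ratios. A quick sketch of how they are computed, using illustrative placeholder counts rather than our actual evaluation data:

```python
def review_reduction(total_triggers: int, alerts_sent: int) -> float:
    """Fraction of triggered images rangers no longer need to review."""
    return 1.0 - alerts_sent / total_triggers

def detection_rate(correctly_flagged: int, total_of_species: int) -> float:
    """Share of one species' images that are correctly flagged."""
    return correctly_flagged / total_of_species

# Placeholder counts: 10,000 triggers, of which 200 warrant an alert.
print(round(review_reduction(10_000, 200), 4))  # 0.98
print(round(detection_rate(78, 100), 4))        # 0.78
```

With these definitions, sending 200 alerts out of 10,000 triggers corresponds to the quoted 98% reduction in review workload.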

What we learned

In projects that involve many different components, communication between team members is key to a successful product. Beyond these project-management issues, it is important to consider how new technologies interact with the methods and practices already in place in the field. We consulted the experts available during the hackathon to learn from their field experience how best to develop ODINN for maximum impact.

What's next for ODINN

  • Increase the training data to improve the accuracy
  • Expand to include a range of species
  • Add an analysis module that tracks the movement of animals, enabling us to predict where they will be even without a camera trap being triggered

Built With
