Inspiration

Our teammate Hannah recently came back from a trip to South Africa, where she was fortunate enough to visit Kruger National Park, one of the best safari parks in the world. Whilst there she observed that the best way to find out where the animals in the park were was via a WhatsApp group! Hannah thought there had to be a better way of using the locations embedded in images to confirm sightings and locations of the Big 5.

In addition, Hannah has recently been working with scientists who track animals around the Serengeti using camera traps, with citizen scientists identifying the animals. This is a long process that requires thousands of hours of researchers' and volunteers' time to create and process the data. We think there is a quicker way of doing this.

What it does

Map My Game:

- Gathers all the images taken within a game reserve, using the geolocation boundaries of the reserve.
- Passes each image through Inception V3, a state-of-the-art image recognition model pre-trained on the ImageNet dataset, and outputs a label for what is actually in the image. This was an important step, as the tags and captions on photos are often broad, pretentious and misleading, so they could not be reliably used as labels.
- Plots points on a map, showing the location of each picture, a preview, and a label of the animal.
- Lets a user tick checkboxes to show only the locations of specific animals in the reserve (see the sketch after this list).
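As a rough sketch of that last step, the species filter can be as simple as keeping only the records whose label is in the user's checked set (the record shape and function name here are our illustration, not the project's exact code):

```python
# One record per classified photo, as plotted on the map.
sightings = [
    {'lat': -24.99, 'lon': 31.59, 'url': '...', 'label': 'lion'},
    {'lat': -23.83, 'lon': 31.05, 'url': '...', 'label': 'elephant'},
]

def visible_points(records, checked):
    """Keep only the sightings for species the user has ticked."""
    return [r for r in records if r['label'] in checked]

print(visible_points(sightings, {'lion'}))  # only the lion marker remains
```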

How we built it

We set a bounding box for the extents of Kruger National Park and used it as a mask with the Flickr API to find all of the images taken in the park over a set period of time. This gave us a database with each image's location, URL and title. We then passed the images through a pre-trained Inception V3 model, using Keras with a TensorFlow backend in Python, and assigned the resulting label to each record in the database. Finally, the locations were plotted on a map using Leaflet, after transforming the image recognition results by grouping similar animals together and filtering out images where the model had assigned a low probability to its best guess.
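For context, here is a minimal sketch of that pipeline, assuming the flickrapi Python package and the stock ImageNet weights that ship with Keras (the bounding-box coordinates, confidence threshold and grouping table are illustrative, not our exact code):

```python
import flickrapi
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Rough bounding box around Kruger: min_lon, min_lat, max_lon, max_lat.
KRUGER_BBOX = '30.9,-25.6,32.1,-22.3'

flickr = flickrapi.FlickrAPI('API_KEY', 'API_SECRET', format='parsed-json')

# Geotagged photos inside the box; 'extras' pulls the coordinates and a
# medium-size image URL back in the same call.
resp = flickr.photos.search(bbox=KRUGER_BBOX, has_geo=1,
                            extras='geo,url_m', per_page=250)
records = [{'lat': float(p['latitude']), 'lon': float(p['longitude']),
            'url': p.get('url_m'), 'title': p['title']}
           for p in resp['photos']['photo']]

model = InceptionV3(weights='imagenet')  # pre-trained, no fine-tuning

# Collapse related ImageNet classes into one label for the map.
GROUPS = {'African_elephant': 'elephant', 'Indian_elephant': 'elephant',
          'tusker': 'elephant'}

def classify(path, min_prob=0.3):
    """Label one downloaded image, or return None when the model's
    best guess falls below the confidence threshold."""
    img = image.load_img(path, target_size=(299, 299))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), 0))
    _, label, prob = decode_predictions(model.predict(x), top=1)[0][0]
    if prob < min_prob:
        return None
    return GROUPS.get(label, label)
```

The labelled records can then be written out as GeoJSON for Leaflet to render, with each feature carrying the photo URL and its grouped label.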

Challenges we ran into

Instagram!

We initially set out to collect the images from Instagram, as it is the best source of up-to-date geotagged images on the web. It turns out that Instagram had a Photo Map feature that it killed off in mid-2016, so we realised we were effectively trying to reverse-engineer that, just for mapping animals. Their API made itself particularly difficult for this purpose: there is a 10-day turnaround for approval to access images in this way. That meant we had to change our source of images to Flickr, the next best option. There are not as many images available and they are less current, but they nevertheless let us showcase the site; when Instagram grants us access we will simply have a bigger pool of images to draw from.

Giraffes = Arabian camels

In validating the ImageNet predictions, we encountered a few funny quirks, chief among them the model's near-categorical failure to identify giraffes! The most common misidentification confused giraffes with Arabian camels. We think this is because of the lack of "non-whole" giraffes in the training examples: our images rarely contain the full body of a giraffe, instead only the neck or head is present. (It probably does not help that the standard 1,000-class ImageNet label set has no giraffe class at all, so the model can only fall back on visually similar animals.)
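Looking past the top-1 guess made these failures easy to spot. A quick way to do that with the stock Inception V3 setup (the image path here is hypothetical):

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights='imagenet')
# Hypothetical close-cropped giraffe photo: neck and head only.
img = image.load_img('giraffe_neck.jpg', target_size=(299, 299))
x = preprocess_input(np.expand_dims(image.img_to_array(img), 0))

# Print the five most likely classes instead of only the best one;
# on crops like this, 'Arabian_camel' routinely came out on top.
for _, label, prob in decode_predictions(model.predict(x), top=5)[0]:
    print(f'{label}: {prob:.2f}')
```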

What we learned

- The Google Inception model trained on ImageNet data is amazing, with excellent accuracy.
- Instagram has a very restrictive API.
- Conner is great at all things.
- We learned to use Leaflet.

What's next for Kruger National Park Animal Map

- Get approval for full use of the Instagram API to build a larger dataset of images for the app.
- Create a way to submit your own photos directly to the map.
- Block images of rhinos to help prevent poaching (see the sketch after this list).
- Make the approach transferable to other animal parks, and to other classifiable things.
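A sketch of how the rhino block could sit in the pipeline, filtering on the grouped label before anything reaches the public map (the label name and record shape are assumptions):

```python
# Species that must never appear on the public map.
BLOCKED_LABELS = {'rhino'}

def publishable(records):
    """Drop sensitive sightings before the map data is written out."""
    return [r for r in records if r['label'] not in BLOCKED_LABELS]
```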

Built With

flickr-api, keras, leaflet, python, tensorflow