Inspiration

The Red Panda Network is an organisation that works to protect and track red pandas in the Himalayan mountains. Their current workflow relies heavily on human labour, and one workflow I will seek to optimise is the analysis and classification of red panda images. Currently, the Red Panda Network manually classifies thousands of images taken by their camera traps to gain insights into which areas pandas frequent, so they can make data-driven decisions to protect those habitats. I have been tasked with streamlining this workflow by leveraging Microsoft Azure and the power of AI.

What it does

In the proposed workflow, users are first authenticated using Active Directory. Once authenticated, they can view and interact with the heatmap on the main dashboard, which shows the locations of all the camera traps they have in the field as well as individually spotted pandas. When a user uploads images associated with a specific camera trap, each image is first classified by the AI classifier, then uploaded to Blob Storage, and an entry is created in the Azure table for that image. The web app is responsive, which also allows forest guardians (red panda volunteers who patrol the mountains) to upload images straight from their phones.
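The upload flow could look roughly like the sketch below: a Flask endpoint that classifies the image, stores the raw file in Blob Storage, and records the result in Azure Tables. The route, container and table names, the `classify_image` stub and the connection-string environment variable are all illustrative placeholders, not the project's actual code.

```python
import os
import uuid

from flask import Flask, request, jsonify
from azure.storage.blob import BlobServiceClient
from azure.data.tables import TableServiceClient

app = Flask(__name__)

# Placeholder connection string and resource names, not the project's real values.
CONN_STR = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
blob_service = BlobServiceClient.from_connection_string(CONN_STR)
table_service = TableServiceClient.from_connection_string(CONN_STR)
sightings_table = table_service.get_table_client(table_name="sightings")


def classify_image(image_bytes):
    """Hypothetical stand-in for the red panda classifier described above."""
    return {"label": "red_panda", "confidence": 0.97}


@app.route("/cameras/<camera_id>/images", methods=["POST"])
def upload_image(camera_id):
    file = request.files["image"]
    image_bytes = file.read()

    # 1. Classify the image before it is persisted.
    result = classify_image(image_bytes)

    # 2. Store the raw image in Blob Storage under the camera trap's prefix.
    blob_name = f"{camera_id}/{uuid.uuid4()}.jpg"
    blob_service.get_blob_client(container="camera-images", blob=blob_name).upload_blob(image_bytes)

    # 3. Record the classification in Azure Tables, partitioned by camera trap.
    sightings_table.create_entity(entity={
        "PartitionKey": camera_id,
        "RowKey": blob_name.replace("/", "_"),  # RowKey may not contain '/'
        "Label": result["label"],
        "Confidence": result["confidence"],
    })

    return jsonify({"blob": blob_name, **result}), 201
```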

How we built it

Frontend :

  • JavaScript
  • Bootstrap
  • HTML, CSS
  • Azure Maps

Backend:

  • Python (Flask)

Database:

  • Azure Tables (NoSQL)

Storage:

  • Azure Blob Storage

Web Host:

  • Azure (Azure app service)

CI/CD:

  • Azure DevOps and Pipelines

Challenges we ran into

Initially, to classify user-uploaded images I was using the customvision.ai API, making an HTTP request per image and reading back the classification. This worked fine for a few images; however, during testing I uploaded a batch of 15 images. Each image took around 3 seconds to classify, which meant the user would be stuck on the loading screen far too long. The logical next step was to make asynchronous calls to the API to drastically reduce the wait time. The wait time was indeed reduced, but this optimisation exposed another problem: the API's rate limit of a maximum of 10 calls per second. To work around this, I paused after every 10 asynchronous calls. This was a crude solution whose runtime was still linear in the number of images.
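A minimal sketch of that throttled approach is below, using aiohttp to fire requests in batches of 10 and sleeping between batches. The prediction URL and header values are placeholders; the real endpoint and Prediction-Key come from the customvision.ai portal.

```python
import asyncio
import aiohttp

# Placeholder endpoint and key for the Custom Vision prediction API.
PREDICTION_URL = "https://<endpoint>/customvision/v3.0/Prediction/<project>/classify/iterations/<iteration>/image"
HEADERS = {"Prediction-Key": "<key>", "Content-Type": "application/octet-stream"}

RATE_LIMIT = 10  # maximum calls per second allowed by the API


async def classify(session, image_bytes):
    async with session.post(PREDICTION_URL, data=image_bytes, headers=HEADERS) as resp:
        resp.raise_for_status()
        return await resp.json()


async def classify_batch(images):
    """Fire requests concurrently, pausing after every RATE_LIMIT calls."""
    results = []
    async with aiohttp.ClientSession() as session:
        for start in range(0, len(images), RATE_LIMIT):
            chunk = images[start:start + RATE_LIMIT]
            results += await asyncio.gather(*(classify(session, img) for img in chunk))
            if start + RATE_LIMIT < len(images):
                await asyncio.sleep(1)  # the crude pause described above
    return results


# results = asyncio.run(classify_batch(list_of_image_bytes))
```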

To improve on this, I decided the best way to tackle the problem was to download the model and classify the images locally on the server. This brought its own challenges, as I now had to preprocess the user-uploaded images to ensure they were compatible with the trained model. To preprocess the images I used the OpenCV and NumPy Python modules, which provide a simple API to scale, crop, resize and adjust the orientation of images of any format. I then made asynchronous calls to this local model, which yielded excellent results: in a random test of 20 unseen images, the classifier classified them all in 5 seconds!
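The preprocessing might look something like the sketch below: decode the upload with OpenCV, fix its orientation, centre-crop to a square and resize to the model's input size. The input size and dtype are assumptions and would need to match whatever the exported model was trained on.

```python
import cv2
import numpy as np

# Assumed input size; must match the locally exported model.
MODEL_INPUT_SIZE = (224, 224)


def preprocess(image_bytes, rotate=None):
    """Decode, orient, centre-crop and resize an uploaded image."""
    img = cv2.imdecode(np.frombuffer(image_bytes, np.uint8), cv2.IMREAD_COLOR)

    if rotate is not None:                      # e.g. cv2.ROTATE_90_CLOCKWISE
        img = cv2.rotate(img, rotate)

    # Centre-crop to a square so the resize does not distort the aspect ratio.
    h, w = img.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    img = img[top:top + side, left:left + side]

    img = cv2.resize(img, MODEL_INPUT_SIZE)
    return img.astype(np.float32)
```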

Accomplishments that we're proud of

I am proud of the final product and the impact it will have on the Red Panda Network and its volunteers.

What we learned

I learnt a lot about using Azure as well as full-stack web development. Another niche skill I developed was image processing using OpenCV and NumPy, which will undoubtedly be useful in any computer vision task I embark on.

What's next for Red Panda Network

The next stage is to get the charity using the software in their daily workflow, which will uncover further bugs for me to fix.
