We Won Best AR/VR Hack Award!

[Image: Best AR/VR Hack announcement]

Try It Yourself

Want to try out EndangARed Defender yourself? Check it out at https://andrewdimmer.github.io/endangared-defender/. Note: Due to time constraints, we currently only support AR on iOS 11 or higher. We also recommend using smaller images on mobile devices, as the TensorFlow model has high CPU usage.
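
If performance is an issue, one easy workaround is to downscale photos before uploading them. Here's a minimal sketch using Pillow (the library choice, target size, and file names are our assumptions for illustration, not part of the app):

```python
from PIL import Image  # pip install Pillow

MAX_DIM = 1024  # assumed cap; smaller inputs mean less CPU work for the model

def downscale(src_path: str, dst_path: str) -> None:
    """Shrink a photo so its longest side is at most MAX_DIM pixels."""
    with Image.open(src_path) as img:
        exif = img.info.get("exif")  # keep the EXIF block so geotags survive
        img.thumbnail((MAX_DIM, MAX_DIM))  # resizes in place, preserving aspect ratio
        img.save(dst_path, **({"exif": exif} if exif else {}))

downscale("elephant.jpg", "elephant_small.jpg")  # hypothetical file names
```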

Inspiration

Human activity on Earth is putting dozens of animal species in imminent danger of extinction. Whether it’s because of poaching, or loss of habitats caused by climate change, pollution, and human encroachment, many species’ very survival hangs in the balance.

The good news about these human-caused problems is that they have the potential to be human-solved before it’s too late.

Unfortunately, one-size-fits-all solutions aren’t effective because of the huge range of different animals and habitats. A solution that helps critically endangered Sumatran elephants, for example, probably wouldn’t help hawksbill sea turtles. The very first step in trying to save an endangered species is therefore finding out how many animals of that species are left, and where they’re currently living. This is a time-consuming, labor-intensive process that requires enormous amounts of money and large numbers of highly organized, highly trained conservationists to locate, track, and monitor the animals.

And that’s exactly why we built EndangARed Defender. Our web app combines crowdsourcing with AI object detection to locate, track, and monitor endangered species easily and inexpensively. This has a double benefit: it frees specialized wildlife protection organizations to focus more of their time, money, and resources on specific conservation efforts, while simultaneously increasing public awareness of endangered species.

What it does

EndangARed Defender allows civilian volunteers, conservationists, and tourists to help track the range and population size of endangered animals without needing specialized training. All they need to do is take pictures of target species in the wild and upload them to our web app. A machine learning object detection model then identifies the animals in each picture, and we log each sighting by pulling the geotagging information from the photo. Finally, we display the sightings over time to the user via Google Maps.
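
For a sense of what that detection step looks like, here's a sketch of sending an uploaded photo to a deployed AutoML Vision object detection model with the google-cloud-automl Python client (the project/model IDs and the score threshold are placeholders, and the call shape follows Google's documented samples rather than our exact code):

```python
from google.cloud import automl  # pip install google-cloud-automl

def detect_animals(image_bytes: bytes, project_id: str, model_id: str):
    """Return (label, confidence) pairs for animals found in one photo."""
    client = automl.PredictionServiceClient()
    model_name = automl.AutoMlClient.model_path(project_id, "us-central1", model_id)
    payload = automl.ExamplePayload(image=automl.Image(image_bytes=image_bytes))
    # Only keep detections the model is reasonably confident about.
    response = client.predict(
        request=automl.PredictRequest(
            name=model_name, payload=payload, params={"score_threshold": "0.5"}
        )
    )
    return [(r.display_name, r.image_object_detection.score) for r in response.payload]
```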

We also provide users with information about the animals they have photographed and about how they can get involved in each animal’s preservation. This includes the animal’s current endangered status, a 3D model so users can see it up close, and links to organizations that help conserve it.

How we built it

We started by building our machine learning model in Google AutoML Vision. To do this, we first collected as many different images of the sample animals as we could. Then, we wrote Python scripts to handle things like bulk renaming and CSV generation, before labeling each image and training the model. In particular, we used an object detection model so that we can identify, count, and track multiple animals (of the same or different types) all in one image.
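
As an illustration, the prep scripts looked roughly like this (the folder name, bucket path, and bounding boxes are hypothetical; the CSV columns follow AutoML Vision's object detection import format as we understood it):

```python
import csv
import os

IMAGE_DIR = "raw_images"                   # hypothetical local folder of photos
BUCKET = "gs://endangared-training-data"   # placeholder Cloud Storage bucket

def bulk_rename(label: str) -> list[str]:
    """Rename everything in IMAGE_DIR to a uniform <label>_<index>.jpg scheme."""
    renamed = []
    for i, name in enumerate(sorted(os.listdir(IMAGE_DIR))):
        new_name = f"{label}_{i:04d}.jpg"
        os.rename(os.path.join(IMAGE_DIR, name), os.path.join(IMAGE_DIR, new_name))
        renamed.append(new_name)
    return renamed

def write_import_csv(rows, out_path: str = "import.csv") -> None:
    """Emit the CSV AutoML Vision imports: set,path,label,x_min,y_min,,,x_max,y_max,,"""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        for file_name, label, (x_min, y_min, x_max, y_max) in rows:
            writer.writerow(["UNASSIGNED", f"{BUCKET}/{file_name}", label,
                             x_min, y_min, "", "", x_max, y_max, "", ""])

files = bulk_rename("sumatran_elephant")
# Dummy full-frame boxes for the sketch; the real coordinates came from labeling.
write_import_csv([(name, "sumatran_elephant", (0.0, 0.0, 1.0, 1.0)) for name in files])
```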

From there, we built a web app to allow users to upload photos and view information about the animals in the photos. We also connected Google Maps to display information about where that animal has been sighted recently. This allows us to track the range and estimate the size of the population over time as users upload more images.
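
As a sketch, the sighting data we hand to the map layer looks something like this (the record shape and function names are ours for illustration; the real app may structure it differently):

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

@dataclass
class Sighting:
    species: str
    lat: float
    lng: float
    seen_on: date

def sightings_to_markers(sightings: list[Sighting]) -> list[dict]:
    """Shape logged sightings into marker dicts the Google Maps front end can plot."""
    return [
        {"position": {"lat": s.lat, "lng": s.lng},
         "title": f"{s.species} ({s.seen_on.isoformat()})"}
        for s in sightings
    ]

def sightings_per_species(sightings: list[Sighting]) -> dict[str, int]:
    """Count sightings per species as a rough proxy for population size over time."""
    counts: dict[str, int] = defaultdict(int)
    for s in sightings:
        counts[s.species] += 1
    return dict(counts)
```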

Finally, we used echoAR to upload, host, and display 3D models of the animal(s) identified in the picture so that users can see them up close, even from the comfort of their own homes.
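
For reference, fetching the hosted models from echoAR looks roughly like this (the endpoint and response shape are from our reading of the echoAR docs and should be treated as assumptions; the API key is a placeholder):

```python
import requests  # pip install requests

ECHOAR_KEY = "your-echoar-project-key"  # placeholder

def list_holograms() -> dict:
    """Query our echoAR project and map entry IDs to hologram file names."""
    resp = requests.get("https://console.echoAR.xyz/query", params={"key": ECHOAR_KEY})
    resp.raise_for_status()
    entries = resp.json().get("db", {})
    return {eid: entry.get("hologram", {}).get("filename")
            for eid, entry in entries.items()}
```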

Challenges we ran into

It took us a long time to figure out how to access file properties after the files were uploaded. Eventually, we found that the best approach was not to do the tag processing on the front end, but rather to send each file to a server, where more capable file-reading libraries could access that information.
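
Here's a minimal sketch of that server-side tag processing, using Pillow to pull the GPS coordinates out of a JPEG's EXIF block (the library choice and error handling are ours for illustration; the deployed code may differ):

```python
from PIL import ExifTags, Image  # pip install Pillow

def extract_geotag(path: str):
    """Return (lat, lng) in decimal degrees from a photo's EXIF GPS tags, or None."""
    with Image.open(path) as img:
        exif = img._getexif() or {}
    tags = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}
    gps_raw = tags.get("GPSInfo")
    if not gps_raw:
        return None
    gps = {ExifTags.GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
    if "GPSLatitude" not in gps or "GPSLongitude" not in gps:
        return None

    def to_degrees(dms, ref):
        # EXIF stores degrees/minutes/seconds as rationals; convert to decimal.
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lng = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lng
```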

Accomplishments that we're proud of

We were really happy with the accuracy of our machine learning model. In addition, we’re proud that we completed all of the main features we set out to build over the course of the hackathon.

What we learned

We learned a lot about file tags and how we can access file properties like the date taken and any attached geotagging data. We also learned more about each of the different types of image classification/detection models, and when to use each one.

What's next for EndangARed Defender

We’d like to expand the machine learning model to include more species of animals. We’d also like to see if we can integrate with the Image-Based Ecological Information System (IBEIS) project, which analyzes animal photos scraped from internet sources such as Flickr and Facebook and applies computer vision and active learning methods to detect the animals, identify the species, and even identify individual animals (their AI techniques can recognize unique individuals as long as they have stripes, wrinkles, or other distinctive textures). Beyond that, we want to extend the machine learning model to identify items in a picture that might indicate the presence of hunters, farmers, or poachers, and to add features to the tracking system that show whether the growth of human settlements has influenced a habitat, since we can already track an animal’s range over time.

Built With

Google AutoML Vision, Google Maps, TensorFlow, echoAR, Python
