The problems facing wildlife encouraged us to take part in the ZooHackathon

What it does

It takes images captured by infrared motion sensors and filters them so that rangers in wildlife preserves receive accurate alerts when potential threats (e.g., humans) are detected.

How we built it

We used Python, CSS, HTML, JavaScript, OpenCV, and Photoshop.

Challenges we ran into

- Noise due to various movements in the background
- Isolating pixels across photo layers
- Differentiating between sharp intensity gradients
- Differentiating between a human disturbance and an animal disturbance

Accomplishments that we're proud of

- Creating our own original algorithm for preprocessing
- Being able to actually process images
- Working as a team

What we learned

- Learning OpenCV
- Learning PIL
- Doing image processing in general

What's next for Eagle-Eye

- Implementing our project on a larger scale
- Extending the project with deep learning algorithms
