Inspiration

We wanted to study point tracking technology using animal videos. Specifically, we wanted to understand how a model follows animal movement across video frames and how it identifies individual animals as distinct objects.

What it does

FaunaFlow leverages point tracking technology to analyze animal movement in video footage. By identifying and following specific points on an animal's body, the project allows for detailed observation of motion patterns, behaviors, and interactions. Using segmentation and point-tracking models, the system distinguishes and tracks individual animals across frames, providing insights that are useful for studying animal behavior, movement dynamics, and social interactions.

How we built it

The initial step involved searching for animal videos featuring clear movement captured by a stationary camera. These videos were then broken down into individual image frames. Using the first frame from each video, masks were created to isolate individual animals. These masks were generated using SAM (Segment Anything Model), which can create precise masks from just a single point selected within the animal's body. In cases where multiple animals were present, their individual masks were combined into a single mask.
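The masking step above can be sketched roughly as follows. The SAM calls are shown only as comments, since the checkpoint path and model type are placeholders rather than the team's actual configuration; the mask-union logic for multiple animals is plain NumPy:

```python
import numpy as np

# Illustrative SAM usage (checkpoint path and model type are placeholders):
#   from segment_anything import sam_model_registry, SamPredictor
#   sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
#   predictor = SamPredictor(sam)
#   predictor.set_image(first_frame)                  # H x W x 3 uint8 frame
#   masks, scores, _ = predictor.predict(
#       point_coords=np.array([[x, y]]),              # one point inside the animal
#       point_labels=np.array([1]))                   # 1 = foreground point
#   animal_mask = masks[np.argmax(scores)]            # best-scoring mask

def combine_masks(masks):
    """Union per-animal boolean masks into a single mask covering all animals."""
    combined = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        combined |= m.astype(bool)
    return combined
```

For a video with several animals, each animal's single-point mask is produced independently and then merged with `combine_masks` before tracking.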

The process begins by loading the first frame and overlaying it with the corresponding mask(s) to ensure precise pixel alignment. We utilized the CoTracker model as our foundational software for point tracking. The videos, converted into PyTorch tensors, are fed into the model alongside their masks. The final output produces a colorful visualization that effectively tracks and displays the animals' movements throughout the video.
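The tensor preparation can be sketched as below, assuming the extracted frames are H x W x 3 uint8 arrays. The CoTracker invocation is shown as a comment because the exact predictor interface varies between releases:

```python
import numpy as np

def frames_to_video_tensor(frames):
    """Stack H x W x 3 uint8 frames into the (1, T, 3, H, W) float layout
    that CoTracker expects (convert with torch.from_numpy before inference)."""
    video = np.stack(frames).astype(np.float32)   # (T, H, W, 3)
    return video.transpose(0, 3, 1, 2)[None]      # (1, T, 3, H, W)

# Illustrative CoTracker call (interface may differ between releases):
#   import torch
#   cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2")
#   pred_tracks, pred_visibility = cotracker(
#       torch.from_numpy(video),
#       segm_mask=torch.from_numpy(combined_mask)[None, None])
```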

Challenges we ran into

The biggest issue we faced was Google Colab's GPU usage limits. We could not run code for long before hitting them, and had to find workarounds (such as switching between different emails/accounts) to keep the notebook running. Additionally, once we started loading our own videos and images into the CoTracker demo, we ran into frame-rate mismatches, which required adjusting the demo to accommodate our files.
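The frame-rate adjustment amounts to resampling the frame sequence. A minimal sketch of the index selection, assuming a constant source frame rate (the function name is ours, not from the CoTracker demo):

```python
def resample_frame_indices(n_frames, src_fps, dst_fps):
    """Pick which source-frame indices to keep when converting a clip
    from src_fps to dst_fps (dst_fps <= src_fps)."""
    step = src_fps / dst_fps                      # source frames per output frame
    n_out = int(n_frames * dst_fps / src_fps)     # output frame count
    return [min(int(round(i * step)), n_frames - 1) for i in range(n_out)]
```

For example, downsampling a 30 fps clip to 10 fps keeps every third frame.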

Accomplishments that we're proud of

We successfully implemented CoTracker and the Segment Anything Model (SAM) to achieve precise point tracking and segmentation of animals in video footage. By integrating CoTracker, we were able to accurately follow key points on animals across frames, enabling detailed analysis of movement patterns.

The SAM model further enhanced our approach by isolating and segmenting animals from their environments, allowing for clear identification and tracking of individual subjects. Together, these models facilitated a deeper understanding of animal behavior and optimized our tracking methodology.

What we learned

Research is fundamentally about persistence through trial and error - problems inevitably arise throughout the process. What began as a straightforward vision required navigating through various technical challenges. Getting the models to work properly involved careful troubleshooting, whether it was ensuring proper mask alignment, dealing with video frame conversions, or fine-tuning the tracking parameters. We learned that successfully implementing computer vision techniques requires attention to detail at each step, from selecting appropriate source videos to properly preprocessing the data before feeding it into the models.

What's next for FaunaFlow: Building Animal Observer via Point-tracking

As for next steps, we hope to use the integrated SAM and CoTracker tool to observe animal movement and draw conclusions about behavior. For example, we would analyze factors like speed, social behavior, and movement irregularities to determine how animals interact with each other, with humans, and with the habitat in which they reside. We would likely use a larger database of animal videos so that we can draw connections across the point-tracking results and reach conclusions about how animals respond to particular stimuli in their environment.

Built With

  • colab
  • cotracker
  • python
  • segmentanything