Inspiration

As AI-generated media continues to grow in quality and accessibility, so does its potential for misinformation and manipulation. We were inspired to build ReelEyes as a tool for digital accountability: a way for creators, consumers, and platforms to verify whether a video is real or fake.

What it does

ReelEyes takes a user-uploaded video, passes its frames through a deepfake detection model, and determines whether the video has been manipulated.

How we built it

The input pipeline starts with a user uploading a video through a web interface. The video is decomposed into frames using OpenCV, and each frame is passed through a deepfake detection model. The frame-level predictions are then aggregated with a weighted scoring system to produce the final verdict.
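The aggregation step can be sketched as follows. This is a minimal illustration, not our exact implementation: the confidence-based weighting scheme and the `aggregate_scores` name are assumptions, and the per-frame fake probabilities would come from the detection model (with the frames themselves pulled via OpenCV's `cv2.VideoCapture`).

```python
from typing import List, Tuple

def aggregate_scores(frame_scores: List[float],
                     threshold: float = 0.5) -> Tuple[str, float]:
    """Combine per-frame fake probabilities into a single video verdict.

    Each frame is weighted by its confidence (distance from 0.5), so
    frames the model is sure about dominate the final score. This
    particular weighting is an assumption for illustration.
    """
    # Small epsilon avoids a zero divisor when every score is exactly 0.5.
    weights = [abs(s - 0.5) + 1e-6 for s in frame_scores]
    video_score = sum(w * s for w, s in zip(weights, frame_scores)) / sum(weights)
    verdict = "FAKE" if video_score >= threshold else "REAL"
    return verdict, video_score
```

A run over mostly-high fake probabilities, e.g. `aggregate_scores([0.9, 0.8, 0.95, 0.6])`, yields a `"FAKE"` verdict, while uniformly low scores yield `"REAL"`.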

Challenges we ran into

As we were creating our run.bat file and preparing to debug, we realized that batch files cannot be run on macOS, and one member of our team was a macOS user. To work around this, we researched different ways to run the program on their laptop; in the end, that teammate wrote an equivalent run.sh script to use in place of run.bat.

Accomplishments that we're proud of

As a team, we are proud of the debugging we did and the solutions we came up with as errors occurred in our project, especially under a tight time constraint.

What we learned

We learned deepfake detection techniques: frame-level analysis, deep learning classifiers, and spotting visual inconsistencies that reveal synthetic content. We also learned that deepfake detectors often struggle to generalize across datasets and generation methods, making robustness a real challenge.

What's next for ReelEyes

We plan to integrate blockchain-based video verification and offer an API that platforms can use to automate content checks.
