Inspiration

With the rapid rise of AI-generated media, detecting deepfakes has become more critical than ever. We were inspired by the need for trustworthy visual content, whether in journalism, social media, or digital forensics. Our goal was to build a model that doesn't just flag fake content but understands what it's looking at.

What it does

Our model takes an image as input and outputs two predictions:

- Whether the image is real (1) or fake (0).
- The class the image belongs to: human_faces, animals, or vehicles.

It’s trained to recognize deepfake manipulations and to generalize to new, unseen synthetic images, something many models struggle with.
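As a minimal sketch of how those two outputs could be interpreted (the function and label names here are hypothetical, not our production code; assume a sigmoid real/fake score and a softmax over the three categories):

```python
import numpy as np

REAL_FAKE_LABELS = {1: "real", 0: "fake"}
CATEGORIES = ["human_faces", "animals", "vehicles"]

def interpret(real_fake_prob: float, category_scores: np.ndarray):
    """Map raw model outputs to the two predictions described above."""
    is_real = int(real_fake_prob >= 0.5)          # threshold the sigmoid score
    category = CATEGORIES[int(np.argmax(category_scores))]  # pick top class
    return REAL_FAKE_LABELS[is_real], category
```

For example, a score of 0.9 with category probabilities peaked on the second class would be read as a real animal image.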

How we built it

We used the Artifact_240K dataset, which includes a mix of real and synthetic images across multiple categories. Our pipeline includes:

- Data preprocessing and augmentation for better generalization.
- Transfer learning with pretrained backbones such as ResNet, implemented in TensorFlow.
- Validation strategies to avoid overfitting and ensure robust performance.
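The preprocessing and augmentation step can be sketched like this (a simplified NumPy version, assuming 0–255 RGB inputs; our actual pipeline runs inside TensorFlow, and the function names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def preprocess(image: np.ndarray) -> np.ndarray:
    """Scale uint8 pixels to [0, 1] float32, a typical normalization step."""
    return image.astype(np.float32) / 255.0

def augment(image: np.ndarray) -> np.ndarray:
    """Random horizontal flip plus small brightness jitter for generalization."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]  # flip along the width axis
    jitter = rng.uniform(-0.1, 0.1)  # brightness shift in normalized units
    return np.clip(image + jitter, 0.0, 1.0)
```

Randomized augmentation like this exposes the backbone to varied views of each training image, which helps it generalize beyond the specific manipulation techniques in the dataset.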

Challenges we ran into

- Preprocessing and model training demanded heavy computational resources because the dataset is huge.
- Avoiding overfitting to synthetic images generated by only a few manipulation techniques.
- Designing a model that simultaneously handles binary classification (real vs. fake) and multi-class classification (image category).
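One common way to handle both tasks at once, which we can sketch here (assumed formulation, not our exact training code), is a combined loss: binary cross-entropy for real/fake plus weighted categorical cross-entropy for the image category:

```python
import numpy as np

def multitask_loss(p_real: float, y_real: int,
                   p_cat: np.ndarray, y_cat: int,
                   cat_weight: float = 1.0) -> float:
    """Sum of the two task losses; cat_weight balances the category head."""
    eps = 1e-7  # avoid log(0)
    # Binary cross-entropy for the real (1) vs. fake (0) head
    bce = -(y_real * np.log(p_real + eps)
            + (1 - y_real) * np.log(1 - p_real + eps))
    # Categorical cross-entropy for the category head (softmax probabilities)
    cce = -np.log(p_cat[y_cat] + eps)
    return float(bce + cat_weight * cce)
```

Minimizing this single scalar trains both heads jointly, and `cat_weight` can be tuned if one task dominates the gradients.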

Accomplishments that we're proud of

We worked as a team and coordinated fantastically.

What we learned

How to host both the frontend and the backend on a server under a custom domain, where connecting and routing the two ends was an integral step.
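The routing idea can be sketched with Python's standard-library HTTP server (a minimal illustration, not our actual deployment stack; the `/api/` prefix and handler names are assumptions): static frontend files are served directly, while paths under the API prefix are handled by backend logic.

```python
import json
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

API_PREFIX = "/api/"

def is_api_route(path: str) -> bool:
    """Decide whether a request path belongs to the backend."""
    return path.startswith(API_PREFIX)

class AppHandler(SimpleHTTPRequestHandler):
    """Serve the frontend as static files and route /api/* to backend logic."""

    def do_GET(self):
        if is_api_route(self.path):
            body = json.dumps({"status": "ok", "route": self.path}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            # Fall back to static file serving for the frontend bundle
            super().do_GET()

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8000), AppHandler).serve_forever()
```

In production this split is usually done by a reverse proxy in front of separate frontend and backend services, but the routing decision is the same.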

What's next for Shamrock

To validate data at a professional level, so we can help clients ensure data correctness.
