Inspiration
Deepfakes and edited media are getting good enough that students can’t reliably tell what’s real, especially when a clip is low quality, reposted many times, or taken out of context. We wanted a tool that doesn’t just say “fake” or “real,” but shows evidence and teaches people how to verify before sharing.
What it does
DeepFake Check is a student-friendly media integrity checker:
- Upload an image (short video clips are planned for future versions)
- Get a Deepfake / Manipulation Risk score (0–100)
- See an explainability heatmap showing regions that look suspicious
- Read reason cards explaining common manipulation cues (artifact patterns, blending, texture inconsistencies, etc.)
- Follow a verification checklist (source check, reverse image search, context checks)

It’s designed to be educational and privacy-aware: we flag content risk, not people.
How we built it
We built a simple pipeline (sketched in code after the list):
- Preprocess uploaded media (resize, normalize, optional frame sampling for video)
- Run an AI detector to estimate manipulation likelihood
- Generate a visual explanation (Grad-CAM / saliency heatmap)
- Convert model signals into human-readable reasons + next-step guidance
- Present results in a clean UI (score + heatmap + explanations)
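As a concrete illustration, here is a minimal sketch of the preprocess-and-score steps in Python, assuming a PyTorch image classifier. The input size, normalization constants, and the `risk_score` helper are illustrative assumptions, not our exact shipped code:

```python
import torch
from PIL import Image
from torchvision import transforms

# Preprocess: resize + normalize, matching the pipeline steps above.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def risk_score(image_path: str, model: torch.nn.Module) -> float:
    """Return a 0-100 manipulation risk score for one image (illustrative)."""
    image = Image.open(image_path).convert("RGB")
    x = preprocess(image).unsqueeze(0)       # shape: (1, 3, 224, 224)
    model.eval()
    with torch.no_grad():
        logit = model(x)                     # raw detector output, f_theta(x)
        p = torch.sigmoid(logit).item()      # probability of manipulation
    return 100.0 * p                         # Risk = 100 * p
```

With a trained detector loaded, something like `risk_score("upload.jpg", model)` would return the 0–100 score shown in the UI.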
Challenges we ran into
- Compression & low resolution can create artifacts that look like deepfakes.
- Avoiding overconfidence: we tuned messaging so uncertain cases are labeled carefully.
- Making explanations understandable for non-technical users
Accomplishments that we're proud of
We treat detection as a binary classification problem:
\[
p = \sigma(f_\theta(x))
\]
where \(x\) is the input media, \(f_\theta\) is the model, and \(p\) is the predicted probability of manipulation.
The risk score is reported as:
\[
\text{Risk} = 100 \times p
\]
For explainability, we generate a class-activation map to visualize which regions contributed most to the prediction.
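Here is a minimal Grad-CAM sketch under the same PyTorch assumption; `target_layer` would be the detector’s last convolutional block, and the hook-based approach shown is one common way to implement the technique, not necessarily our exact code:

```python
import torch
import torch.nn.functional as F

def grad_cam(model: torch.nn.Module, target_layer: torch.nn.Module,
             x: torch.Tensor) -> torch.Tensor:
    """Heatmap in [0, 1] over the input, for a batch x of shape (1, C, H, W)."""
    acts, grads = [], []
    fwd = target_layer.register_forward_hook(
        lambda mod, inp, out: acts.append(out))
    bwd = target_layer.register_full_backward_hook(
        lambda mod, gin, gout: grads.append(gout[0]))

    model.eval()
    logit = model(x)           # raw manipulation logit, f_theta(x)
    model.zero_grad()
    logit.sum().backward()     # gradients of the "manipulated" score
    fwd.remove()
    bwd.remove()

    a, g = acts[0], grads[0]                        # both (1, K, h, w)
    weights = g.mean(dim=(2, 3), keepdim=True)      # per-channel importance
    cam = F.relu((weights * a).sum(dim=1, keepdim=True))   # (1, 1, h, w)
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False).squeeze()     # (H, W)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The resulting map is overlaid on the original image to produce the heatmap shown in the results view.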
What we learned
- Explainability matters: users trust the result more when they can see where the model is looking.
- UX is part of safety: clear “verify” guidance reduces misuse and false certainty.
- Even small improvements in clarity can drastically improve real-world usefulness.
What's next for DeepFake Check: Media Integrity Tool
- Frame-by-frame video analysis + stability score across frames (see the sketch after this list)
- Better robustness to compression and poor lighting
- “Classroom mode” for media-literacy lessons and demos
- Optional metadata and tamper-evidence signals to complement AI predictions
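For the planned stability score, one hypothetical approach (the function name and the 0–1 scaling are illustrative, not a committed design) is to score sampled frames independently and penalize clips whose per-frame scores disagree:

```python
import statistics

def video_risk(frame_scores: list[float]) -> dict:
    """Summarize per-frame risk scores (each 0-100) for a clip (illustrative)."""
    mean_risk = statistics.mean(frame_scores)
    # Low spread across frames -> a more stable, more trustworthy signal.
    spread = statistics.pstdev(frame_scores)
    stability = max(0.0, 1.0 - spread / 50.0)   # crude 0-1 stability score
    return {"risk": mean_risk, "stability": stability}

# Example: fairly consistent scores across three sampled frames.
print(video_risk([82.0, 79.5, 85.1]))
```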