Inspiration
The inspiration for AI Guard came from watching deepfake videos spread across social media while viewers had little ability to judge them.
In many cases, people could not confidently tell whether a video was real or fake. Discussions quickly turned into arguments, speculation, and misinformation, with no clear explanation of why the media looked suspicious.
The real problem was not just deepfakes themselves, but the lack of transparent, explainable analysis.
When this hackathon provided access to Gemini 3 Pro, it became possible to build a responsible solution focused on explanation rather than overconfident predictions.
What it does
AI Guard is a risk-based forensic media analysis tool.
Users can upload a video (primary mode), upload an image (experimental), or provide a public media URL. During analysis, the original media remains visible so users can clearly see what is being examined.
Instead of labeling content as “real” or “fake,” AI Guard performs multi-stage forensic reasoning and returns:
• A calibrated probability score
• A clear, human-readable explanation
• An anomaly timeline showing how risk evolves over time
• Forensic markers highlighting specific inconsistencies
The goal is to help users understand why media may be suspicious, not to claim absolute truth.
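As a rough sketch, the analysis result the UI consumes could be typed as follows. All field names here are illustrative assumptions, not the exact schema:

```typescript
// Hypothetical shape of an AI Guard analysis result (illustrative only).
interface ForensicMarker {
  label: string;          // e.g. "Inconsistent facial boundary"
  detail: string;         // human-readable description of the inconsistency
  timestampSec?: number;  // where in the clip the marker applies, if known
}

interface AnomalyPoint {
  timeSec: number;  // position in the clip
  risk: number;     // 0..1 risk estimate at this moment
}

interface AnalysisResult {
  probability: number;        // calibrated manipulation probability, 0..1
  explanation: string;        // clear, human-readable summary
  timeline: AnomalyPoint[];   // how risk evolves over time
  markers: ForensicMarker[];  // specific inconsistencies found
}
```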
How we built it
AI Guard is built as a modern web application using Next.js, React, TypeScript, Tailwind CSS, and Framer Motion for a polished forensic UI.
Gemini 3 Pro is used as the core multimodal reasoning engine. It analyzes uploaded media using a structured, multi-stage forensic approach and returns explainable JSON outputs.
The application is frontend-first, requires no paid APIs, and avoids unsafe scraping or platform violations. Browser localStorage is used only for scan history, keeping the system lightweight and demo-ready.
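For example, scan history persistence can be as simple as the following sketch (the `aiGuardHistory` key and `ScanRecord` shape are assumptions for illustration, not the exact implementation):

```typescript
// Minimal localStorage-backed scan history (key name is illustrative).
interface ScanRecord {
  id: string;
  fileName: string;
  probability: number;
  scannedAt: string; // ISO timestamp
}

const HISTORY_KEY = "aiGuardHistory";

function loadHistory(): ScanRecord[] {
  const raw = localStorage.getItem(HISTORY_KEY);
  return raw ? (JSON.parse(raw) as ScanRecord[]) : [];
}

function saveScan(record: ScanRecord): void {
  // Prepend the newest scan and cap history size to stay lightweight.
  const history = [record, ...loadHistory()].slice(0, 50);
  localStorage.setItem(HISTORY_KEY, JSON.stringify(history));
}
```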
How Gemini 3 Pro is used
Gemini 3 Pro is the core reasoning engine behind AI Guard.
It performs structured, multi-stage forensic reasoning across media:
Stage 1 — Frame & visual consistency (lighting, texture, facial boundaries)
Stage 2 — Temporal consistency across frames (motion, expressions, jitter)
Stage 3 — Audio–visual alignment (lip sync and timing, if audio is present)
Stage 4 — Signal quality and limitations (resolution, compression, clip length)
Gemini returns a structured JSON response containing a probability score, forensic markers, and an executive explanation.
This allows AI Guard to provide explainable, risk-based analysis instead of black-box predictions.
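A simplified sketch of that call using the `@google/genai` SDK is shown below. The model id, prompt wording, and response handling are illustrative assumptions, not our exact production code:

```typescript
import { GoogleGenAI } from "@google/genai";

// Illustrative sketch: model id and prompt text are assumptions.
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function analyzeMedia(base64Video: string): Promise<unknown> {
  const response = await ai.models.generateContent({
    model: "gemini-3-pro-preview", // placeholder model id
    contents: [
      {
        role: "user",
        parts: [
          {
            text:
              "Stage 1: assess frame & visual consistency. " +
              "Stage 2: assess temporal consistency across frames. " +
              "Stage 3: assess audio-visual alignment if audio is present. " +
              "Stage 4: note signal-quality limitations. " +
              "Respond with JSON: { probability, markers, explanation }.",
          },
          { inlineData: { mimeType: "video/mp4", data: base64Video } },
        ],
      },
    ],
  });

  // The model is asked to emit JSON; parse defensively.
  return JSON.parse(response.text ?? "{}");
}
```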
Challenges we ran into
One of the main challenges was avoiding overconfident or misleading outputs.
We had to carefully design the system so that Gemini focuses on forensic signals rather than semantic meaning, avoids binary conclusions, and clearly communicates uncertainty.
Balancing explainability, accuracy, and responsible AI behavior was the most important technical and design challenge.
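In practice, much of this came down to prompt-level guardrails. A condensed, illustrative version of the kind of instruction we converged on (the exact wording here is an assumption):

```typescript
// Condensed, illustrative guardrail instructions (not the exact prompt).
const FORENSIC_GUARDRAILS = `
- Judge only signal-level artifacts (lighting, texture, motion, lip sync).
- Ignore what is said or claimed in the media; semantic content is not evidence.
- Never output a binary "real" or "fake" verdict; report a probability instead.
- State uncertainty explicitly when resolution, compression, or clip length
  limit the analysis.
`;
```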
Accomplishments that we're proud of
We’re proud of building an explainable deepfake analysis system that prioritizes transparency over certainty.
AI Guard demonstrates how Gemini 3 Pro can be used responsibly for real-world problems, combining strong reasoning, clear UI, and ethical design principles in a demo-ready product.
What we learned
Building AI Guard taught us that explainability is more important than confidence when dealing with sensitive AI problems like deepfakes.
We learned that high probability scores alone are misleading without context. Users trust systems more when they can see what the AI is analyzing and understand why a conclusion was reached. Keeping the original media visible during analysis and pairing it with timelines and forensic markers significantly improved transparency.
From a technical perspective, we learned how to design structured prompts that guide Gemini 3 Pro through multi-stage reasoning instead of surface-level pattern matching. Small prompt changes had a major impact on output stability, calibration, and bias reduction.
We also learned the importance of handling uncertainty responsibly. For example, audio content and spoken claims can strongly bias models if not explicitly controlled. Designing safeguards to ignore semantic statements and focus on signal-level artifacts was a key learning.
Finally, we learned that UI and UX are not just presentation layers—they directly affect how users interpret AI results. A calm, forensic interface communicates uncertainty far better than aggressive “real vs fake” labels.
Overall, this project helped us better understand how to build AI systems that are not just powerful, but also trustworthy, explainable, and ethically designed.
What's next for AI Guard
Future plans include deeper temporal analysis, educational modes for users, and optional tools for journalists and fact-checkers.
AI Guard is designed to evolve responsibly as synthetic media becomes more advanced.
Built With
- css3
- framermotion
- googlegemini3pro
- html5
- localstorage
- lucidereact
- nextjs
- react
- recharts
- tailwindcss
- typescript