Inspiration
In an era where "seeing is no longer believing," the rapid proliferation of deepfakes has created a crisis of digital trust. We noticed that high-quality synthetic media is being used to manipulate public opinion, particularly through fake videos of celebrities and politicians that go viral before they can be debunked.
We were inspired to build VERITAS.AI to give the power of verification back to the people. Our goal wasn't just to build another detector, but a comprehensive "truth engine" that evolves. By combining the multimodal reasoning of Gemini 3 with specialized Deep Neural Networks (DNNs), we wanted to create a system that doesn't just say "this is fake," but provides a transparent, data-backed confidence score and tracks the content back to its origin.
What it does
VERITAS.AI is an advanced deepfake detection platform that identifies manipulated images and videos with high precision. Users interact with the system either by uploading a file directly or by pasting a URL for a remote scan.
Features of VERITAS.AI:
Dual-Input Scanning: Supports direct uploads and real-time URL analysis to capture "in-the-wild" forgeries.
Deep Forensic Analysis: Powered by deep neural networks trained on world-class datasets, including DFDC, HuggingFace repositories, and FaceForensics++.
Confidence Scoring: Instead of a simple yes/no, the model provides a granular confidence score, showing exactly how certain the AI is about the media's authenticity.
Source Provenance: The system attempts to trace the media's digital footprint to identify where it was first uploaded, helping users understand the context of the leak.
High-Profile Protection: Includes specialized training to detect deepfakes of famous celebrities and politicians, acting as an early warning system for viral misinformation.
Continuous Evolution: Designed to learn and update from new data over time, ensuring it stays ahead of evolving GAN (Generative Adversarial Network) and diffusion-based manipulation techniques.
How we built it
To give the judges a peek under the hood, here's our stack:
The Brain: Developed using Google AI Studio for high-level reasoning and orchestration.
The Engine: Built with a hybrid of PyTorch and TensorFlow to leverage the best of both deep learning libraries.
The Vision: Integrated OpenCV for frame extraction and pixel-level preprocessing to spot inconsistencies humans can't see.
Data Foundation: Trained on a massive corpus of data, including the Facebook Deepfake Detection Challenge (DFDC) and various HuggingFace repositories.
Challenges we ran into
1. The "Cat-and-Mouse" Generalization Gap One of our biggest hurdles was "model drift." We found that a model trained perfectly on the DFDC dataset would sometimes struggle with "in-the-wild" videos from social media. This is because real-world platforms use heavy compression (like WhatsApp or Instagram), which can strip away the subtle pixel-level artifacts our PyTorch layers were looking for. We had to implement data augmentation—purposefully blurring and compressing our training data—to make VERITAS.AI robust enough for real-world URLs.
2. Tracing the "Digital Ghost" (Source Provenance) Tracing a video back to its original source is notoriously difficult. When a video is re-uploaded, its metadata is often wiped. We initially struggled with how to track provenance without a central database of every video on the internet. We solved this by using Gemini 3’s advanced reasoning to analyze unique background details and environmental markers, cross-referencing them with known historical uploads to find the earliest possible "fingerprint" of the media.
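Gemini's reasoning does the heavy lifting here, but one classical complement for matching re-uploads is perceptual hashing, which survives re-encoding and metadata stripping. A minimal average-hash sketch in pure NumPy (the 8x8 hash size is the conventional default, not something tuned for our pipeline):

```python
import numpy as np

def average_hash(gray_image, hash_size=8):
    """64-bit perceptual hash: block-downsample, then threshold at the mean."""
    h, w = gray_image.shape
    # Crop so the image divides evenly, then box-average into hash_size x hash_size
    small = gray_image[:h - h % hash_size, :w - w % hash_size]
    small = small.reshape(hash_size, small.shape[0] // hash_size,
                          hash_size, small.shape[1] // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(a, b):
    """Differing bits; small distances suggest the same underlying frame."""
    return int(np.count_nonzero(a != b))
```

Hashing key frames of a suspect video and comparing against hashes of known earlier uploads gives a cheap first-pass candidate list before any deeper reasoning runs.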
3. Integration Complexity: The Hybrid Stack Combining TensorFlow (for our core DNN) with PyTorch (for specific feature extraction) and Google AI Studio created a complex pipeline. Managing the hand-off between these frameworks without high latency was a challenge. We had to optimize our OpenCV preprocessing to ensure that video frames were being fed into the models at a speed that allowed for near-real-time confidence scoring.
4. Balancing False Positives In deepfake detection, a "False Positive" (calling a real video fake) can be just as damaging as a "False Negative." We spent significant time fine-tuning our thresholds to ensure that low-quality, grainy footage of politicians—which often looks "glitchy" naturally—wasn't incorrectly flagged as a deepfake.
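One way to operationalize that threshold tuning is to pick the decision threshold from genuine validation footage so the false-positive rate stays under a fixed budget; the 5% budget and function names below are illustrative:

```python
import numpy as np

def pick_threshold(real_scores, max_fpr=0.05):
    """Flag media as fake when score >= threshold; choose the threshold so that
    at most max_fpr of genuine validation videos would be flagged."""
    return float(np.quantile(np.asarray(real_scores), 1.0 - max_fpr))

def recall_at(threshold, fake_scores):
    """Fraction of known fakes still caught at that threshold."""
    return float((np.asarray(fake_scores) >= threshold).mean())
```

Sweeping `max_fpr` and watching the resulting recall makes the false-positive/false-negative trade-off explicit instead of baked into a hard-coded 0.5 cutoff.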
Accomplishments that we're proud of
1. The "Celebrity Shield" Specialized Module We successfully developed a targeted detection layer specifically for high-profile celebrities and politicians. By fine-tuning our model on specific facial markers of frequently targeted public figures, we created an "early warning system" that can identify viral deepfakes before they cause widespread misinformation.
2. Robust Multi-Dataset Fusion Models rarely generalize well across datasets. We are incredibly proud of how we integrated diverse training data from DFDC, HuggingFace, and FaceForensics++. This fusion allowed VERITAS.AI to move beyond "lab-only" detection and handle real-world variations in lighting, skin tone, and camera quality.
3. Meaningful Confidence Scoring We moved away from binary "Real/Fake" outcomes. We successfully implemented a Confidence Score system that provides a percentage based on deep neural network weightings. This makes the AI's decision-making process transparent, giving users a nuanced understanding of why a piece of media might be suspicious.
4. Hybrid Tech Stack Integration We managed to orchestrate a complex pipeline involving Google AI Studio for high-level reasoning, PyTorch and TensorFlow for model architecture, and OpenCV for real-time video processing. Getting these different libraries to work together in a single, streamlined URL-scan system was a major technical victory.
5. Source Traceability Despite the challenges of metadata stripping, we built a successful prototype for Source Provenance. Being able to give users a lead on where a video was first uploaded provides the critical context needed to fight "fake news" at its root.
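In its simplest form, the confidence scoring from accomplishment 3 boils down to exposing the softmax probability instead of a hard argmax decision (a sketch of the idea, not the production model's exact head or calibration):

```python
import numpy as np

def confidence_score(logits):
    """Convert a two-class [real, fake] logit pair into a 'fake' confidence
    percentage via softmax, rather than a binary yes/no."""
    logits = np.asarray(logits, dtype=np.float64)
    exp = np.exp(logits - logits.max())   # subtract max for numerical stability
    probs = exp / exp.sum()
    return round(100.0 * probs[1], 1)     # percent confidence the media is fake
```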
What we learned
The Complexity of "Fake": We learned that deepfakes aren't just face-swaps anymore; manipulation now involves voice cloning and "cheapfakes" (contextual lies), which requires a multimodal approach like Gemini 3.
Preprocessing is Key: No matter how good the neural network is, the quality of the OpenCV frame extraction and noise reduction determines the final accuracy.
The Ethical Responsibility: Working on VERITAS.AI taught us that as AI creators, we have a responsibility to build tools that protect digital integrity as fast as others build tools to disrupt it.
What's next for VERITAS.AI
Browser Extension: We plan to launch a Chrome extension that automatically flags deepfakes on social media feeds in real-time.
Audio-Visual Sync Analysis: Adding a layer to detect mismatches between audio waveforms and lip movements (lip-sync forgery).
Decentralized Truth Ledger: Exploring the use of blockchain to create a "Verified by Veritas" badge for original content creators to protect their authentic work.
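The planned audio-visual sync layer could be prototyped as a correlation between the audio loudness envelope and a per-frame mouth-openness signal; a dubbed or desynced track correlates poorly. This sketch assumes both signals have already been extracted and resampled to a shared frame rate, and `sync_score` is a hypothetical name:

```python
import numpy as np

def sync_score(audio_envelope, mouth_openness):
    """Pearson correlation between audio loudness and lip aperture per frame.
    Values near 1.0 suggest genuine sync; values near 0 flag possible forgery."""
    a = np.asarray(audio_envelope, dtype=np.float64)
    m = np.asarray(mouth_openness, dtype=np.float64)
    a = (a - a.mean()) / (a.std() + 1e-9)   # z-score both signals
    m = (m - m.mean()) / (m.std() + 1e-9)
    return float(np.mean(a * m))
```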