Inspiration

With the rapid rise of generative AI tools like DALL·E, Midjourney, and Stable Diffusion, the line between real and synthetic media has blurred. Deepfakes and AI-generated visuals can now convincingly replicate human faces, voices, and entire scenes, posing serious risks to privacy, journalism, and online trust. From manipulated political clips to AI-driven scams, the inability to distinguish authentic content from fabrications has fueled record-high fraud losses exceeding $12.5 billion in 2024, threatening the very foundation of digital credibility.

TrueView was built to restore that trust. It enables anyone to verify the authenticity of photos and videos within seconds, not just by assigning confidence scores, but by visually and intuitively showing why something appears AI-generated. By highlighting subtle cues like unnatural textures, smooth motion, or inconsistent edges, TrueView helps people understand how AI media differs from the real thing. Our goal is not only to detect deception but to educate users, empowering them to recognize signs of manipulation and navigate the digital world with greater awareness and confidence.

What it does

TrueView analyzes uploaded images and videos through a streamlined pipeline. The media is first sent to the AIorNot API for an initial AI or deepfake confidence score, then processed by our in-house MediaAnalyzer, which uses OpenCV to extract key visual metrics. These results are passed to the ExplainabilityEngine, powered by Google’s Gemini API, which translates technical data into clear, human-readable insights. Within seconds, users receive a confidence score, a verdict, and short explanations of the visual cues that led to that conclusion.

By combining external detection with transparent, explainable analysis, TrueView goes beyond black-box classification. It helps users see what makes media suspicious, from overly smooth textures to unnatural motion, empowering them to understand and identify AI-generated content with confidence.

How we built it

Detection Pipeline:

  1. File Upload: User uploads an image or video through the TrueView Homepage

  2. Primary Detection: The backend sends the file to the AIorNot API for initial AI/deepfake detection

  3. Computer Vision Analysis: Our MediaAnalyzer class processes the media using OpenCV to extract visual metrics

  4. Overall Explanation Generation: The ExplainabilityEngine uses these metrics and calls the Gemini API to generate an analysis summary.

  5. Response: The verdict is then displayed on the dashboard along with confidence scores and both overall and metric-specific reasoning.
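The five steps above can be sketched as a minimal backend flow. Everything here is illustrative: the function names, the hard-coded values, and the stubbed AIorNot, OpenCV, and Gemini stages are stand-ins, not the actual TrueView code.

```python
# Minimal sketch of the detection pipeline, with hypothetical names and
# hard-coded stand-ins for the AIorNot, OpenCV, and Gemini stages.

def primary_detection(file_bytes: bytes) -> float:
    # Step 2 (stubbed): a real version would POST the file to the
    # AIorNot API and parse its AI-confidence score.
    return 0.87

def analyze_media(file_bytes: bytes) -> dict:
    # Step 3 (stubbed): placeholder values standing in for the
    # OpenCV-derived visual metrics.
    return {"texture_variance": 12.4, "edge_density": 0.03}

def explain(score: float, metrics: dict) -> str:
    # Step 4 (stubbed): a real version would prompt the Gemini API
    # with the score and metrics to get a human-readable summary.
    return f"Confidence {score:.0%}; low edge density suggests smoothing."

def run_pipeline(file_bytes: bytes) -> dict:
    # Steps 1-5: assemble the payload the dashboard would display.
    score = primary_detection(file_bytes)
    metrics = analyze_media(file_bytes)
    verdict = "likely AI-generated" if score >= 0.5 else "likely authentic"
    return {
        "score": score,
        "verdict": verdict,
        "metrics": metrics,
        "explanation": explain(score, metrics),
    }
```

The 0.5 verdict threshold is an assumption for the sketch; the real dashboard may weigh the external score and the local metrics differently.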

Challenges we ran into

Finding a dependable, highly accurate AI detector with an API cheap enough for us to test against was surprisingly hard: this is a niche capability to expose as an API, few implementations exist, and those that do are expensive. The AIorNot API also did not reveal the attributes behind its verdicts; it was a black box. To compensate, we implemented a separate image-analysis algorithm using OpenCV. Even then, we needed a way to present this data to users with little technical knowledge while breaking down what each metric meant, so we used the Gemini model to generate those explanations.

Accomplishments that we're proud of

  - Built a fully functional AI web app within hackathon time.

  - Developed interpretable AI detection: not just results, but reasons.

  - Designed a futuristic, responsive React dashboard for real-time analysis.

  - Successfully connected frontend and backend pipelines for seamless processing.

  - Balanced speed, accuracy, and transparency, making deepfake detection accessible to everyone.

What we learned

Building TrueView taught us that trust in AI detection depends as much on explainability as on accuracy. Early versions that relied solely on detection APIs felt incomplete, as users wanted to understand why something was classified as AI-generated, not just see a percentage. Developing our own OpenCV-based analysis showed us how measurable patterns like texture variance or edge density can reveal hidden clues about authenticity.
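As an illustration of the kind of measurable patterns mentioned above, here is a small NumPy sketch of two such metrics. These formulas are simplified stand-ins of our own devising, not the actual OpenCV-based MediaAnalyzer implementation.

```python
import numpy as np

def texture_variance(gray: np.ndarray) -> float:
    # Variance of a Laplacian-style response; low values hint at the
    # overly smooth textures common in generated images. Plain NumPy
    # stand-in for an OpenCV Laplacian-based measure.
    lap = (np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1)
           - 4 * gray)
    return float(lap.var())

def edge_density(gray: np.ndarray, thresh: float = 0.1) -> float:
    # Fraction of pixels with a strong gradient magnitude: a crude
    # substitute for counting pixels in a Canny edge map.
    gy, gx = np.gradient(gray)
    return float((np.hypot(gx, gy) > thresh).mean())

# A perfectly flat image scores zero on both metrics; noise scores higher.
flat = np.full((64, 64), 0.5)
noisy = np.random.default_rng(0).random((64, 64))
```

The gap between a flat patch and a noisy one shows why these metrics carry signal: real photographs sit somewhere between the two extremes, while heavy AI smoothing pushes regions toward the flat end.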

We also learned the importance of translating technical data into language people can actually understand. Integrating Google’s Gemini API allowed us to bridge that gap, turning complex metrics into clear, human explanations. Ultimately, we discovered that detection isn’t just a technical problem; it’s an educational one. TrueView doesn’t just identify AI-generated media; it helps users recognize and understand the visual fingerprints of synthetic content for themselves.

What's next for TrueView

  1. Browser Extension: Develop a Chrome/Firefox extension that allows users to right-click any image or video on the web and analyze it instantly.

  2. Advanced Metrics: Incorporate additional detection methods:

  - Frequency domain analysis (FFT)

  - EXIF metadata examination

  - Compression artifact analysis

  - Face landmark consistency (for deepfakes)

  3. Mobile App: Develop native iOS and Android applications for on-the-go media verification.
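One of the planned additions, frequency-domain analysis, could look roughly like this. The metric and its cutoff are illustrative assumptions on our part, not a tuned detector feature; the idea is that generated and heavily processed images often show atypical high-frequency energy.

```python
import numpy as np

def high_freq_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    # Share of spectral energy outside a low-frequency disc of the 2D FFT.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum centre, normalised so the
    # nearest spectrum edge sits at radius 1.0.
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(spec[r > cutoff].sum() / spec.sum())
```

On synthetic data, pure noise concentrates far more of its energy in high frequencies than a smooth gradient does, which is the kind of contrast a metric like this would exploit.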

Built With

React, OpenCV, Google Gemini API, AIorNot API
