Inspiration

Misinformation spreads faster than ever through edited and AI-generated images, but most people do not have access to simple tools that help them understand whether an image may be authentic or suspicious. Existing forensic tools are often too technical, fragmented, or difficult for everyday users to interpret.

TruthTrace AI was inspired by this gap. I wanted to build a tool that makes image forensics more accessible, visual, and explainable. Instead of giving only a black-box result, the project shows multiple forensic signals such as Error Level Analysis (ELA), noise inconsistency patterns, metadata checks, frequency spectrum clues, and suspicious-region highlighting. The goal was to turn complex forensic analysis into something understandable for students, journalists, researchers, and ordinary users.

What it does

TruthTrace AI is an explainable image authenticity verification system that helps users assess whether an uploaded image is likely authentic, likely manipulated, or too ambiguous to call either way.

After a user uploads an image, the system performs several forensic checks:

  • Error Level Analysis (ELA) to inspect compression differences and potential edited regions
  • Noise inconsistency analysis to detect unusual pixel-level patterns
  • FFT frequency spectrum analysis to identify suspicious frequency distributions and AI-like artifact clues
  • Metadata inspection to check for camera metadata or missing EXIF information
  • Compression anomaly analysis to detect signs of unusual saving or recompression
  • Suspicious region highlighting to visually mark the area with the strongest anomaly signal
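The ELA check above can be sketched in a few lines. This is a minimal illustration, assuming both the original and its recompressed copy are already decoded into flat grayscale arrays of equal size; the real pipeline would first decode and re-encode the JPEG with a server-side image library.

```typescript
// Core ELA step: compare an image against its re-encoded copy and
// measure per-pixel differences. Regions that respond differently to
// recompression (high local error) are candidates for editing.
function elaDifference(original: Uint8Array, recompressed: Uint8Array) {
  if (original.length !== recompressed.length) {
    throw new Error("images must have the same dimensions");
  }
  const diff = new Uint8Array(original.length);
  let sum = 0;
  let max = 0;
  for (let i = 0; i < original.length; i++) {
    const d = Math.abs(original[i] - recompressed[i]);
    diff[i] = d;
    sum += d;
    if (d > max) max = d;
  }
  return { diff, meanError: sum / original.length, maxError: max };
}
```

The `diff` buffer is what gets rendered as the ELA heatmap, while the summary statistics feed into the overall score.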

The system then combines these signals into an authenticity score and verdict:

  • Likely Authentic
  • Uncertain
  • Likely Manipulated / AI Generated
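The combination step can be sketched as a weighted average over normalized signals. The weights and thresholds below are illustrative assumptions, not the project's calibrated values; each signal is assumed to be normalized into [0, 1], where higher means more suspicious.

```typescript
type Signals = {
  ela: number;
  noise: number;
  frequency: number;
  metadata: number;
  compression: number;
};

// Hypothetical weights (summing to 1.0); a real tool would tune these.
const WEIGHTS: Signals = {
  ela: 0.3,
  noise: 0.2,
  frequency: 0.2,
  metadata: 0.15,
  compression: 0.15,
};

function authenticityVerdict(signals: Signals): { score: number; verdict: string } {
  let score = 0;
  for (const key of Object.keys(WEIGHTS) as (keyof Signals)[]) {
    score += WEIGHTS[key] * signals[key];
  }
  // Hypothetical thresholds mapping the score to the three verdicts.
  if (score < 0.35) return { score, verdict: "Likely Authentic" };
  if (score < 0.65) return { score, verdict: "Uncertain" };
  return { score, verdict: "Likely Manipulated / AI Generated" };
}
```

Keeping an explicit "Uncertain" band is deliberate: it lets the system report ambiguity instead of forcing a binary call.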

Finally, it generates a downloadable forensic PDF report containing the score, forensic signals, explanation notes, and visual evidence.

How I built it

TruthTrace AI was built as a full-stack web application using Next.js, TypeScript, and Tailwind CSS.

Frontend

The frontend provides:

  • a clean upload interface
  • real-time results display
  • evidence cards for each forensic visualization
  • a score and verdict dashboard
  • a downloadable report flow

Backend

The backend was implemented using Next.js API routes, keeping the entire application self-contained without requiring a separate server.

Analysis pipeline

The forensic pipeline was built using server-side image processing techniques:

  • image recompression and difference calculation for ELA
  • grayscale transformation and edge/noise analysis for inconsistency mapping
  • FFT-based frequency spectrum generation
  • EXIF and metadata extraction
  • heuristic-based compression anomaly scoring
  • suspicious-region localization using the highest anomaly area
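The noise-inconsistency and region-localization steps above can be sketched together: split a grayscale image into blocks, estimate each block's noise via local variance, and flag the block that deviates most from the image's typical (median) noise level. The block size and grayscale layout are assumptions for illustration.

```typescript
// Flags the block whose noise level differs most from the median,
// i.e. the strongest anomaly region in this simple model.
function localizeAnomaly(
  pixels: Uint8Array,
  width: number,
  height: number,
  block = 8
): { x: number; y: number; variance: number } {
  const blocks: { x: number; y: number; variance: number }[] = [];
  for (let by = 0; by + block <= height; by += block) {
    for (let bx = 0; bx + block <= width; bx += block) {
      let sum = 0;
      let sumSq = 0;
      const n = block * block;
      for (let y = by; y < by + block; y++) {
        for (let x = bx; x < bx + block; x++) {
          const v = pixels[y * width + x];
          sum += v;
          sumSq += v * v;
        }
      }
      const mean = sum / n;
      blocks.push({ x: bx, y: by, variance: sumSq / n - mean * mean });
    }
  }
  const sorted = blocks.map((b) => b.variance).sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  return blocks.reduce((best, cur) =>
    Math.abs(cur.variance - median) > Math.abs(best.variance - median) ? cur : best
  );
}
```

The returned coordinates are what the suspicious-region highlighter would draw a box around.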

Report generation

A PDF report generation route was added so that each analysis can be exported into a shareable forensic report. This makes the project feel practical and useful beyond a simple demo.
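Much of the debugging here came down to the download response itself. The sketch below shows the kind of headers such a route needs to return alongside the PDF bytes; the filename pattern is an illustrative assumption.

```typescript
// Headers for a PDF download response. "attachment" tells the browser
// to download the file instead of rendering it inline, and "no-store"
// keeps caches from serving a stale report for a fresh analysis.
function pdfDownloadHeaders(reportId: string): Record<string, string> {
  return {
    "Content-Type": "application/pdf",
    "Content-Disposition": `attachment; filename="truthtrace-report-${reportId}.pdf"`,
    "Cache-Control": "no-store",
  };
}
```

The body must also be sent as binary (a buffer or stream), not a string, or the resulting file will fail to open.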

Challenges I ran into

This project had several technical and practical challenges.

1. Making forensic outputs understandable

Forensic techniques can easily become too technical. One challenge was transforming raw outputs into visuals and explanations that a non-expert can understand.

2. ELA visualization issues

At first, the ELA heatmap was too dark or appeared almost fully black for some images because the difference between the original and recompressed image was very small. I had to adjust the visualization so small error values became more visible without changing the underlying forensic logic.
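The fix described above amounts to amplifying small difference values for display only, leaving the underlying data untouched. A minimal sketch, with an illustrative scale factor:

```typescript
// Amplifies ELA difference values for visualization. Small differences
// (e.g. 1-5 out of 255) render as nearly black, so they are scaled up,
// clamped to the 0..255 range. The raw diff buffer used for scoring is
// never modified.
function amplifyForDisplay(diff: Uint8Array, scale = 15): Uint8Array {
  const out = new Uint8Array(diff.length);
  for (let i = 0; i < diff.length; i++) {
    out[i] = Math.min(255, diff[i] * scale);
  }
  return out;
}
```

An adaptive variant could instead normalize by the maximum observed error, but a fixed clamped scale keeps heatmaps comparable across images.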

3. PDF generation bugs

Generating a valid downloadable PDF report reliably required debugging stream handling and response formatting, especially to ensure that the report downloaded and opened correctly.

4. Balancing accuracy with hackathon time limits

Image forensics is inherently complex. In a short hackathon window, the challenge was to build something technically meaningful, explainable, and polished without overengineering or depending on heavy external models.

5. AI-generated images are difficult

Modern AI-generated images can be very clean and may not always trigger strong forensic artifacts. This made it important to present the system as an evidence-based forensic assistant rather than a perfect “real vs fake” classifier.

What I learned

This project taught me a lot about:

  • practical image forensics
  • visual explainability in AI/security tools
  • building end-to-end full-stack applications quickly
  • designing around uncertainty instead of forcing overconfident outputs
  • improving technical credibility through user-facing reports and evidence

I also learned that in digital forensics, uncertainty is not failure. A realistic forensic tool should communicate signals, evidence, and confidence rather than pretend every image can be classified perfectly.

Why this project matters

TruthTrace AI addresses a real and growing problem: people increasingly encounter manipulated and AI-generated images, but they often have no accessible way to inspect them.

This project makes forensic reasoning more transparent by combining:

  • visual evidence
  • explainable scoring
  • suspicious-region localization
  • downloadable reports

That makes the tool useful not only as a technical prototype, but also as a step toward more accessible digital trust and media verification.

Future improvements

There are several directions to expand TruthTrace AI further:

  • stronger AI-image detection using learned models
  • video and deepfake frame analysis
  • browser extension for quick image verification
  • newsroom or social-media moderation workflow support
  • side-by-side comparison mode
  • batch verification for multiple images
  • improved forensic confidence estimation

Conclusion

TruthTrace AI is a practical, explainable image authenticity verification tool designed to make digital forensics more accessible. Instead of treating authenticity detection as a black box, it provides users with visual evidence, forensic signals, and a professional report they can inspect and share.

In a world where manipulated and AI-generated media are becoming increasingly common, TruthTrace AI is built to support more informed and transparent image verification.

Built With

  • Next.js
  • TypeScript
  • Tailwind CSS
