MediaShield: AI-Generated & Deepfake Image Detection Platform
Inspiration
The rise of AI‑generated images and deepfakes has made it increasingly difficult to trust what we see online. Tools like Bing Image Creator, DALL·E, Midjourney, and Stable Diffusion can create hyper‑realistic faces that look completely real, while deepfake technology enables face swaps and manipulated media that can be used for misinformation or identity fraud. I wanted to build something simple, fast, and reliable that helps people verify whether an image is real or AI‑generated. That idea became MediaShield, a tool designed to help users protect themselves from digital deception.
What I Learned
Working on MediaShield taught me how challenging modern image forensics really is. I learned that detectors trained on GAN artifacts often fail on diffusion-generated images, that many AI models don’t return numeric scores, and that deepfake detection is a fundamentally different problem from AI-generated image detection. I also learned how to test and validate different Replicate models, handle file uploads, build secure API routes, and design a clean UI. This project strengthened my skills in Next.js, TailwindCSS, external ML APIs, debugging backend issues, and building user-friendly interfaces.
How I Built It
I started by adding secure authentication using Auth0 so that only verified users can access the detection dashboard. This required configuring Auth0, handling callbacks, protecting routes, and managing sessions in the Next.js App Router. For the frontend, I built a simple interface where users can upload an image and instantly see the AI‑generated score, deepfake score, and final verdict. On the backend, I used two Replicate models: FaceForensics++ for deepfake detection and tstramer/ai-image-detector for AI‑generated image detection. I combined both outputs using a clear decision system that classifies images as Real, AI‑generated, Deepfake, or Uncertain.
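The decision system described above can be sketched as a small classifier over the two model outputs. This is an illustrative reconstruction, not the project's actual code: the threshold values, type names, and the rule ordering are assumptions.

```typescript
// Hypothetical sketch of MediaShield-style verdict logic.
// Thresholds (0.7 / 0.3) are illustrative assumptions.

type Verdict = "Real" | "AI-generated" | "Deepfake" | "Uncertain";

interface Scores {
  aiScore: number | null;       // 0..1 from the AI-image detector, null if the model returned none
  deepfakeScore: number | null; // 0..1 from the deepfake detector, null if the model returned none
}

function classify({ aiScore, deepfakeScore }: Scores): Verdict {
  // A missing score means the models disagree with themselves silently,
  // so report "Uncertain" rather than a misleading 0.0%.
  if (aiScore === null || deepfakeScore === null) return "Uncertain";
  // Deepfake takes priority: a swapped face matters even if the rest looks natural.
  if (deepfakeScore >= 0.7) return "Deepfake";
  if (aiScore >= 0.7) return "AI-generated";
  // Only call an image Real when both detectors are confidently low.
  if (aiScore <= 0.3 && deepfakeScore <= 0.3) return "Real";
  return "Uncertain";
}
```

Checking the deepfake score first reflects the priority implied by the writeup: a face swap is a distinct, higher-stakes manipulation even when the overall image is photographic.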
Challenges Faced
Integrating Auth0 was challenging because it required correct callback URLs, environment variables, and route protection. Another major challenge was that early detectors always returned “Real” for AI images because they only worked on GANs, not diffusion models. Some Replicate models returned no score at all, which caused the UI to show 0.0% for everything. I also had to design verdict logic that made sense and refine the UI multiple times to make the results clear, colour-coded, and easy to understand.
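The "no score at all" problem can be handled with defensive parsing of the model response. The helper below is an assumed sketch (not the project's code) showing one way to normalize a Replicate output that may arrive as a number, a numeric string, an object, or nothing:

```typescript
// Illustrative helper (assumed, not from MediaShield itself) that pulls a
// numeric score out of a Replicate model's output, returning null instead
// of defaulting to 0 — so the UI can show "Uncertain" rather than 0.0%.
function extractScore(output: unknown): number | null {
  if (typeof output === "number" && Number.isFinite(output)) return output;
  if (typeof output === "string") {
    const n = Number.parseFloat(output);
    return Number.isFinite(n) ? n : null;
  }
  if (output !== null && typeof output === "object") {
    // "score" is a hypothetical field name; real models vary.
    const maybe = (output as Record<string, unknown>)["score"];
    if (typeof maybe === "number" && Number.isFinite(maybe)) return maybe;
  }
  return null;
}
```

Returning `null` instead of `0` is the key design choice: a zero would render as a confident "0.0% AI-generated", while `null` lets the UI distinguish "the model said real" from "the model said nothing".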
Future Development
The next step for MediaShield is improving accuracy by combining more advanced forensic models into a smarter ensemble. Since diffusion‑based AI images are extremely hard to analyze, mismatched results can still happen, and this opens the door for deeper research. In the future, MediaShield can expand beyond images to include video deepfake detection, AI‑generated voice detection, and document/text authenticity checks. Adding metadata forensics, a browser extension, and a public API would help turn MediaShield into a complete digital verification platform for everyday users, journalists, and security teams.
Built With
- ai-image-detector
- auth0
- github
- javascript
- next.js 14 (app router)
- replicate api
- tailwindcss
- typescript