Inspiration

We live in a moment where AI-generated media and fast-moving cyber threats make it hard to know what’s real. For our first hackathon, we wanted to build something practical that helps people verify what they see and hear—so we set out to make deepfake checks simple and fast.

What it does

We built a browser extension that helps users check the site they are visiting. In this first beta, it tests the images on the page and reports a confidence level for each one, giving users a sense of how trustworthy the content is. Under the hood, it uses Hugging Face models for image deepfake detection and audio spoof detection, plus an optional ViT classifier to describe the image. Results appear as a human-readable summary (Likely Deepfake / Likely Real / Inconclusive), with the raw JSON available for anyone who wants the details.
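To make that last step concrete, here is a minimal sketch of how a detector score can be mapped to one of the three verdicts. The function name and thresholds are illustrative, not the exact values we ship:

    def summarize(fake_score: float) -> str:
        """Map a 0-1 'fake' probability to a human-readable verdict."""
        # Thresholds are illustrative; real values would be tuned per model.
        if fake_score >= 0.7:
            return "Likely Deepfake"
        if fake_score <= 0.3:
            return "Likely Real"
        return "Inconclusive"  # mid-range scores are too close to call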

How we built it

  • Backend: Python + Flask, Pillow, Requests, Flask-CORS; Hugging Face Inference API (token auth) for the detectors; optional local ViT via transformers.
  • Frontend: HTML + Bootstrap 5 + vanilla JS; drag-and-drop upload, progress states, pretty score bars, and a health check.
  • Integration: serve the frontend from Flask (same origin) to avoid CORS issues; secrets live in environment variables loaded from .env.
  • Dev tooling: GitHub branches/PRs; curl for endpoint testing; ngrok for quick sharing.
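As a rough sketch of how those pieces fit together, the snippet below shows a Flask route forwarding an uploaded image to the Hugging Face Inference API with a bearer token loaded from .env. The route path, placeholder model slug, and variable names are illustrative, not our exact code:

    import os
    import requests
    from dotenv import load_dotenv
    from flask import Flask, jsonify, request

    load_dotenv()  # pulls HF_TOKEN from a local .env file
    HF_TOKEN = os.environ["HF_TOKEN"]
    # Placeholder slug; this would point at an image deepfake-detection model.
    API_URL = "https://api-inference.huggingface.co/models/<model-slug>"

    app = Flask(__name__)

    @app.route("/api/detect", methods=["POST"])
    def detect():
        image_bytes = request.files["file"].read()
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {HF_TOKEN}"},
            data=image_bytes,
        )
        resp.raise_for_status()  # surface 401/404 errors instead of hiding them
        return jsonify(resp.json())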

Challenges we ran into

With only a little experience of Git and GitHub, we struggled to manage branches and resolve the problems Git threw at us. We also hit cross-platform issues: code that worked on some laptops bugged out on others, especially since one of us was on a Mac. On the API side, we dealt with 401s and 404s from Hugging Face (a bad token or model slug), content-type issues, and 503 responses while models warmed up.
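The 503s in particular taught us to treat "model loading" as a normal state rather than a failure. Here is a sketch of one way to wait it out; the function name and retry limits are illustrative, and it assumes the HF Inference API's documented behavior of returning an estimated_time field with a 503:

    import time
    import requests

    def post_with_warmup(url, headers, payload, retries=5):
        """POST to the HF Inference API, waiting out 503 'model loading' replies."""
        for _ in range(retries):
            resp = requests.post(url, headers=headers, data=payload)
            if resp.status_code == 503:
                # HF reports an estimated warm-up time while the model spins up.
                wait = resp.json().get("estimated_time", 10)
                time.sleep(min(wait, 30))  # cap the wait so we fail reasonably fast
                continue
            return resp
        raise RuntimeError("Model did not finish loading in time")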

Accomplishments that we’re proud of

Going into our first hackathon we had not planned anything major, so we are glad to have shipped a working end-to-end prototype. We created a clean, user-friendly UI that converts model scores into a simple verdict, and implemented robust error messages with actionable hints (e.g., token missing, model loading issues, bad slug). Most of all, we learned a lot and worked through every roadblock together as a team.
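Those actionable hints amount to mapping common failure modes to plain-language fixes. A toy version of the idea, with illustrative wording rather than our exact backend strings:

    HINTS = {
        401: "Hugging Face token missing or invalid - check HF_TOKEN in your .env.",
        404: "Model not found - double-check the model slug in the API URL.",
        503: "Model is still loading on Hugging Face - retry in a few seconds.",
    }

    def hint_for(status_code: int) -> str:
        # Fall back to a generic message for codes we have not special-cased.
        return HINTS.get(status_code, f"Unexpected HTTP {status_code} from the API.")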

What we learned

Here are a few things we learned, both as a team and individually:

  • Practical Git workflows (branches, rebases, PRs, resolving push conflicts).
  • Managing env vars/secrets, API auth, and handling HTTP status codes meaningfully.
  • Picking and integrating ML models; choosing defaults that “just work.”
  • Serving a SPA from Flask, avoiding CORS, and designing usable result summaries.
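On that last point, serving the static frontend from the same Flask app sidesteps CORS entirely, because the browser never makes a cross-origin request. A minimal sketch, assuming the built frontend lives in a folder named frontend:

    from flask import Flask, send_from_directory

    # Static files and the API share one origin, so no CORS headers are needed.
    app = Flask(__name__, static_folder="frontend", static_url_path="")

    @app.route("/")
    def index():
        return send_from_directory(app.static_folder, "index.html")

    if __name__ == "__main__":
        app.run(port=5000)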

What’s next for ByteMe

ByteMe plans to improve the current detection pipeline and add a learning component, so the system gets better with every failure it encounters. We plan to add video deepfake detection and batch processing, improve model and ensemble calibration, and make the explanations of “what looked fake” clearer. In keeping with our focus on safety, we intend to introduce privacy-first processing (with on-device options), plus security hardening and rate limiting. In the long run, we want ByteMe running in real time on cameras and devices, flagging fakes on our users’ most accessible device: their phones!
