Inspiration

Every day, the internet drops another mind-bending AI video — hyperrealistic cities, impossible camera moves, people who don’t exist. And the wild part? Even creators, editors, and designers we know kept getting fooled. Our group chat basically turned into: “Bro this is real.” “No it’s AI.” “WAIT???” And the scary truth: half the time, none of us were right.

That’s when the idea hit us. If we can barely tell what’s real anymore, what about everyone else? What if we turned this moment of confusion into a game — a challenge — a showdown between human intuition and AI illusion? We wanted something fast. Fun. Competitive. Something that exposes your blind spots, sharpens your instincts, and shows just how far generative video has evolved. So we built a game. A game where reality and AI blur, your gut gets tested, and every round makes you question your own eyes. Thus: “REAL OR RENDERED?” A playful answer to a very real problem — in a world where the line between real and artificial gets thinner every day.

What it does

Real or Rendered is a perception challenge built around a simple question: can you accurately tell whether a video is real or generated by AI? Players are presented with ten short clips, each randomly selected from our dataset. After each clip, they make a choice—real or AI-generated—and immediately see whether they were correct. The game tracks accuracy, highlights patterns in your choices, and reveals how often your intuition aligns with reality. At the end, players receive a detailed breakdown of their performance and can submit their score to a global leaderboard to see how they compare with others. The result is a streamlined, immersive experience that shows just how convincing synthetic video has become.

How we built it

Frontend: We built the interface using React, TypeScript, and Vite to keep the app fast and responsive. Tailwind CSS provided a clean design foundation, while Framer Motion added cinematic transitions to create a smooth, modern experience.

Backend and Database: Supabase powers our backend, storing all video metadata and handling secure access through Row Level Security. The leaderboard is also managed through Supabase, allowing players to instantly submit and view scores.
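
To illustrate the leaderboard flow, here is a minimal sketch of a score submission. The row shape and column names (`player_name`, `score`, `total`) are assumptions, and the client interface is deliberately reduced to the one call we need; in the real app the client comes from supabase-js's `createClient(SUPABASE_URL, SUPABASE_ANON_KEY)`.

```typescript
// Hypothetical leaderboard row; column names are assumptions, not our exact schema.
interface LeaderboardEntry {
  player_name: string;
  score: number; // correct guesses
  total: number; // clips shown
}

// Structural type for the one piece of a supabase-js-like client this sketch
// uses, so the example stays self-contained.
interface LeaderboardClient {
  from(table: string): {
    insert(row: LeaderboardEntry): Promise<{ error: { message: string } | null }>;
  };
}

// Submit a score, surfacing any error (e.g. an RLS rejection) from the backend.
async function submitScore(
  client: LeaderboardClient,
  entry: LeaderboardEntry
): Promise<boolean> {
  const { error } = await client.from("leaderboard").insert(entry);
  if (error) {
    console.error("Score submission failed:", error.message);
    return false;
  }
  return true;
}
```

Keeping the client behind a narrow interface also makes the submission path easy to unit-test with a fake client.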

Dataset Pipeline: We created a scraping system using Node.js, twitter-scraper, and yt-dlp to collect AI-generated content from platforms like X/Twitter. This automated pipeline downloads, organizes, and prepares video files for the game.
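
As one example of what the download step looks like, here is a sketch of building a yt-dlp argument list. The specific flags (mp4-only, a size cap, an id-based output template) are illustrative assumptions, not our pipeline's exact settings.

```typescript
// Build the argument list for a yt-dlp download. The flag choices here are
// assumptions for illustration, not the pipeline's exact configuration.
function buildYtDlpArgs(url: string, outDir: string): string[] {
  return [
    "-f", "mp4",                      // keep formats consistent for the in-game player
    "--no-playlist",                  // one clip per link
    "--max-filesize", "50M",          // skip clips too heavy for smooth playback
    "-o", `${outDir}/%(id)s.%(ext)s`, // name files by the platform's video id
    url,
  ];
}

// In the Node pipeline this would be handed to child_process.spawn, e.g.:
//   spawn("yt-dlp", buildYtDlpArgs(link, "downloads"));
```

Keeping the argument construction as a pure function makes it easy to test without actually hitting the network.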

Game Logic: React state tracks user guesses, score progression, and performance metrics. Videos are served randomly from the database using a dedicated API function designed for efficiency and randomness.
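
The per-round state update can be sketched as a pure function (type and field names here are illustrative, not our exact code): record each guess against the clip's true label, and derive the score from the accumulated rounds.

```typescript
type Label = "real" | "ai";

interface RoundResult {
  guess: Label;
  truth: Label;
  correct: boolean;
}

// Append one round's outcome. Returning a new array (rather than mutating)
// gives React's setState a fresh reference to re-render from.
function recordGuess(rounds: RoundResult[], guess: Label, truth: Label): RoundResult[] {
  return [...rounds, { guess, truth, correct: guess === truth }];
}

// The running score is just the count of correct rounds.
function score(rounds: RoundResult[]): number {
  return rounds.filter((r) => r.correct).length;
}
```

In a component this would sit inside a state updater, e.g. `setRounds(prev => recordGuess(prev, guess, clip.truth))`, so the score and end-of-game breakdown both derive from one source of truth.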

Database Schema: Two core tables—videos and leaderboard—support both gameplay and score tracking. A computed accuracy column provides clean, automatic performance analytics.
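
The computed accuracy column amounts to a guarded division, which the sketch below mirrors in TypeScript (the column and field names in the comment are assumptions about the schema, not its exact definition):

```typescript
// A Postgres computed column along these lines (names are assumptions):
//   accuracy numeric GENERATED ALWAYS AS
//     (correct_guesses::numeric / NULLIF(total_guesses, 0)) STORED
// The same formula in TypeScript:
function accuracy(correct: number, total: number): number | null {
  if (total === 0) return null; // mirror NULLIF: no rounds played, no accuracy
  return correct / total;
}
```

Computing accuracy in the database keeps the analytics consistent no matter which client writes the row.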

Challenges we ran into

One major challenge was gathering high-quality AI-generated video clips. Social media scraping comes with rate limits, broken links, and inconsistent media formats, all of which required careful filtering and handling. Ensuring seamless video playback across browsers was a second obstacle: autoplay restrictions, decoding delays, and file inconsistencies all had to be addressed to avoid interrupting gameplay. Performance was a third focus. Videos are large files, so we had to optimize loading states, transitions, and UI responsiveness to keep the game feeling smooth and uninterrupted.

Accomplishments that we're proud of

We built a fully functional, visually refined game under time pressure. We created an automated pipeline for gathering real and AI-generated videos, designed a scalable backend, and implemented a real leaderboard. We are proud of turning a simple idea into a polished product that people genuinely enjoy using. The final result feels cohesive, fast, and intentionally designed.

What we learned

This project taught us how to build and integrate a complete full-stack system using React, Supabase, and Node.js. We learned how to work with video performance constraints, how to design secure database schemas, and how to smooth out the interactions between frontend and backend. We also deepened our understanding of synthetic media itself. Working closely with AI-generated video content made it clear how quickly these models are advancing and how challenging human perception has become.

What's next for Real or Rendered

We plan to expand the dataset, add difficulty levels, and introduce more subtle or advanced generative videos to increase the challenge. Future versions may include audio-based detection, user-uploaded clips for community voting, and a competitive multiplayer mode. In the long term, we envision building tools that help people better understand and navigate a world where artificial content is indistinguishable from the real thing.

Built With

react, typescript, vite, tailwind-css, framer-motion, supabase, node.js, yt-dlp
