## Inspiration

I've always been fascinated by lie detection — not clunky polygraph machines, but something that actually watches and listens to you in real time. When I discovered that the Gemini 2.5 Flash Live API could process simultaneous audio and video streams with near-zero latency, the idea clicked immediately: what if you could play a game against an AI that's genuinely trying to catch you lying, frame by frame?

The classic party game "Two Truths and a Lie" was the other spark. I wanted to bring that social energy into something playable alone or with a group, powered by a model smart enough to actually probe your story for inconsistencies.

## What it does

Truth or Li(v)e is a two-mode AI lie detection game:

- **You Tell a Story** — You tell the AI a true or made-up story. It watches your face via live video, listens to your voice, and asks targeted questions before delivering a verdict.
- **AI Tells a Story** — The AI tells a story about a random topic (true or false). You and your friends ask questions and vote on whether it's real. Supports up to 5 players.

Win and you unlock a shareable achievement card generated entirely with the Canvas API — optionally with your victory selfie.

## How we built it

The frontend is React + TypeScript + Vite, styled with Tailwind CSS. The core engine is the Gemini 2.5 Flash Live API, which receives a continuous stream of JPEG frames from the webcam alongside 16kHz PCM audio — all encoded in base64 and sent as sendRealtimeInput messages over a persistent WebSocket.
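The streaming payload can be sketched as a pure builder function. The exact message schema belongs to the Live API; the `mediaChunks` field name and the overall shape below are assumptions for illustration, not the SDK's definitive wire format.

```typescript
// Hedged sketch of the realtime-input payload sent over the WebSocket.
// The `mediaChunks` shape is an illustrative assumption, not the official schema.
export interface MediaChunk {
  mimeType: string;
  data: string; // base64-encoded bytes
}

// Wrap a captured JPEG frame or a slice of 16kHz PCM audio for sending.
export function toMediaChunk(bytes: Uint8Array, mimeType: string): MediaChunk {
  return { mimeType, data: Buffer.from(bytes).toString("base64") };
}

// Serialize one sendRealtimeInput-style message for the socket.
export function realtimeInputMessage(chunks: MediaChunk[]): string {
  return JSON.stringify({ realtimeInput: { mediaChunks: chunks } });
}
```

In the app, a frame grabbed from a `<canvas>` and the latest PCM buffer would each pass through `toMediaChunk` before being written to the open WebSocket.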

Audio responses from Gemini are queued and scheduled using the Web Audio API with a precise timestamp scheduler to prevent gaps between chunks. A live suspicion meter uses keyword analysis on the AI's transcript to give visual feedback during the interrogation:

$$\text{Suspicion Score} = \text{base} + \sum_{i} w_i \cdot \mathbf{1}[\text{keyword}_i \in \text{transcript}]$$
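The formula above reduces to a small pure function. The keywords and weights below are illustrative stand-ins, not the shipped values.

```typescript
// Sketch of the suspicion meter: base score plus a weighted indicator sum
// over keywords found in the AI's transcript. Weights are illustrative.
const KEYWORD_WEIGHTS: Record<string, number> = {
  inconsistent: 25,
  "looked away": 20,
  hesitated: 15,
  nervous: 10,
};

const BASE_SCORE = 10;

export function suspicionScore(transcript: string): number {
  const text = transcript.toLowerCase();
  let score = BASE_SCORE;
  for (const [keyword, weight] of Object.entries(KEYWORD_WEIGHTS)) {
    if (text.includes(keyword)) score += weight; // indicator 1[keyword in transcript]
  }
  return Math.min(score, 100); // clamp for the on-screen meter
}
```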

The backend is Express.js with SQLite for game history logging. The app is deployed on Google Cloud Run with continuous deployment from GitHub via Cloud Build, and the API key is served securely at runtime through Secret Manager.

## Challenges we ran into

**Runtime vs. build-time environment variables** — Vite bakes env vars into the bundle at build time, but Cloud Run injects secrets at runtime. The fix was moving the key to a server-side /api/config endpoint fetched on load.
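A minimal sketch of that fix, using Node's built-in `http` module so it stays dependency-free (the project itself uses Express); the `GEMINI_API_KEY` variable name and the response shape are assumptions:

```typescript
// Sketch: serve the runtime-injected secret from /api/config instead of
// baking it into the Vite bundle. Names here are hypothetical.
import { createServer } from "node:http";

// Build the JSON body returned by /api/config. Cloud Run injects the secret
// into the environment at runtime via Secret Manager.
export function configBody(env: Record<string, string | undefined>): string {
  return JSON.stringify({ apiKey: env.GEMINI_API_KEY ?? "" });
}

export const server = createServer((req, res) => {
  if (req.url === "/api/config") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(configBody(process.env));
  } else {
    res.writeHead(404);
    res.end();
  }
});
```

The client fetches `/api/config` once on load, before opening the Live API WebSocket.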

**Audio scheduling** — Playing Gemini's audio chunks naively caused crackling and overlaps. I implemented a queue with a `nextStartTime` pointer so each chunk is scheduled exactly where the previous one ends.
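The queue logic can be isolated from the Web Audio API as a pure scheduler; the class and method names below are illustrative:

```typescript
// Sketch of the gap-free scheduler. In the real app each chunk is an
// AudioBuffer played through an AudioContext; here the Web Audio calls are
// replaced by a pure computation so the nextStartTime idea is clear.
export class ChunkScheduler {
  private nextStartTime = 0;

  // Returns the absolute time at which this chunk should start playing.
  schedule(durationSec: number, currentTime: number): number {
    // Never schedule in the past; resynchronize after a queue underrun.
    const startAt = Math.max(this.nextStartTime, currentTime);
    this.nextStartTime = startAt + durationSec; // next chunk starts where this one ends
    return startAt;
  }
}
```

In the browser, the returned value would be passed to `AudioBufferSourceNode.start(startAt)`, with `currentTime` read from `AudioContext.currentTime`.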

**Getting the AI to deliver a verdict reliably** — The model would sometimes keep asking questions instead of concluding. The fix was making the instruction unambiguous: the response after the final question must start with `VERDICT: TRUE` or `VERDICT: FALSE` — no preamble allowed.
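That contract is then easy to enforce client-side with a strict parser; this is a sketch with hypothetical names:

```typescript
// Sketch: parse the model's final reply under the strict-prefix contract.
// A null result means the reply broke the format and the model is re-prompted.
export type Verdict = "TRUE" | "FALSE" | null;

export function parseVerdict(reply: string): Verdict {
  const match = reply.trim().match(/^VERDICT:\s*(TRUE|FALSE)\b/);
  return match ? (match[1] as Verdict) : null;
}
```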

**Micro-expression detection** — Sending frames at ~10 FPS gave the model enough temporal resolution to notice changes, but consistency varied. I tuned the suspicion meter to weight specific linguistic signals rather than relying solely on the model's confidence.
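The ~10 FPS cap can be sketched as a small gate deciding whether to send each captured frame; the function name is hypothetical:

```typescript
// Sketch: rate-limit outgoing webcam frames to roughly targetFps.
// Returns a predicate that says whether the frame at nowMs should be sent.
export function makeFrameGate(targetFps: number): (nowMs: number) => boolean {
  const minIntervalMs = 1000 / targetFps;
  let lastSentMs = -Infinity;
  return (nowMs: number): boolean => {
    if (nowMs - lastSentMs >= minIntervalMs) {
      lastSentMs = nowMs;
      return true; // send this frame
    }
    return false; // skip: too soon since the last frame
  };
}
```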

## Accomplishments that we're proud of

- A fully working real-time multimodal lie detector that genuinely notices when you look away or your voice shifts
- Multiplayer support with per-player question turns, all coordinated through a single Live API session
- A client-side achievement card generator using the Canvas API — no image generation API needed
- A clean deployment pipeline from GitHub push to live URL with zero manual steps

## What we learned

- The Gemini Live API's multimodal capability is genuinely powerful — simultaneous video + audio analysis in a single session opens up interaction patterns that simply weren't possible before
- Prompt engineering for game mechanics requires strict format constraints. Vague instructions like "give your verdict next" fail; explicit constraints like "your response MUST start with VERDICT:" work reliably
- Cloud Run + buildpacks is a clean deployment story once the runtime vs. build-time environment variable distinction is properly understood

## What's next for Truth or Li(v)e

- A leaderboard tracking who has fooled the AI the most
- Harder difficulty modes where the AI is more aggressive and asks follow-up questions mid-story
- Mobile support with optimized camera framing
- A "practice mode" where the AI gives you feedback on how to lie better — coaching your delivery, pacing, and eye contact
