Inspiration
We started with a question that made us laugh: what if a courtroom game used real data as evidence?
Most hackathon projects that use datasets treat the data as background — charts that sit on a dashboard page nobody interacts with. We wanted the data to be the actual game mechanic. When we found the Kaggle Pokemon stats dataset, it clicked immediately. Every Pokemon has concrete stats — Speed, Attack, Defense, HP, Type. Those stats are facts. And facts are exactly what a courtroom runs on.
So we built a game where Pokemon witnesses testify, and every one of them is lying about something. The lie is always tied to a real stat. Wobbuffet says he chased Gengar through the Pokemon Center — but his Speed is 33 and Gengar's is 110. That is not an opinion. It is a data contradiction, and the player has to spot it.
The idea felt right because it combined three things we all care about: games that make you think, AI that stays in character, and datasets that actually do something useful.
What it does
PokéCourt is an interactive courtroom mystery game. The player is a defense attorney. Their client — a Pokemon — has been accused of a crime. Three Pokemon witnesses take the stand and testify, but each one is hiding a lie somewhere in their testimony.
The player's job is to listen carefully, press witnesses for more details, ask free-form questions, review the evidence file (which contains the witness's real stats from the Kaggle dataset), and raise an OBJECTION when they spot a statement that contradicts the data.
Catch the contradiction, and the witness breaks. Miss it, and you lose credibility. Three wrong objections and the defendant is convicted.
Key features:
- 3 full cases — The Haunted Heist (Gengar), The Berry Heist (Snorlax), and Grand Prix Sabotage (Alakazam), each with unique witnesses and lies
- AI-driven characters — Judge Slowking, Prosecutor Mr. Mime, and 9 witness Pokemon all powered by Gemini, each with a distinct personality that responds dynamically to player questions
- Data-powered contradictions — Every lie maps directly to a verifiable stat in the Kaggle Pokemon dataset
- 3 difficulty modes — Rookie (obvious Speed/Attack lies), Ace (trickier stat contradictions), and Legendary (subtle type and special stat deception)
- Press streak system — Press a witness 3 times in a row and they start slipping up, giving you bonus clues
- Hint system — Spend 1 HP to get a nudge toward the right stat, adding a risk/reward layer
- Cross-exam timer — 90-second clock per witness keeps the pressure on
- OBJECTION cutscene — Full-screen animated objection moment with sound, because you have to earn that feeling
- IRL learning tips — After each correct objection, the game explains the real-world critical thinking skill you just used (e.g., "Feasibility Analysis — before believing a claim, ask: do the numbers actually make it possible?")
- Contradiction replay card — Shows exactly what the witness claimed vs. what the data says, reinforcing the connection between testimony and evidence
- Data Insight dashboard — Recharts-powered visualizations showing stat distributions, type matchups, and how the dataset drives the game logic
- Pause menu — ESC to pause with volume control, because games should have pause menus
- Post-trial debrief — AI-generated breakdown of what reasoning skills you used and how they apply in real life
How we built it
Frontend: Next.js 14 (App Router) + React + TypeScript + Tailwind CSS. We chose this stack because it let us move fast, share types across the whole codebase, and deploy to Vercel in minutes.
3D Courtroom: Three.js via React Three Fiber for the courtroom environment. The camera shifts dynamically between judge, prosecutor, witness, and defense positions based on game state.
AI Characters: Google Gemini API (gemini-2.5-flash) powers every character. Each has a system prompt defining their personality, their relationship to the truth, and rules for how they respond to pressure. The judge is patient and formal. The prosecutor is theatrical. Each witness has a specific lie they are protecting, and their nervousness escalates as the player gets closer to the contradiction. We also use Gemini to evaluate whether a player's free-text objection reasoning is logically valid.
Data Engine: The Kaggle Pokemon dataset (800+ Pokemon, 13 columns) is loaded server-side. A contradiction engine takes a Pokemon name and difficulty level and returns a crafted lie that is always falsifiable against the real stats.
Game State: A finite state machine manages the trial flow — opening statements, witness testimony, cross-examination, objection resolution, verdict. Each state transition triggers the right dialogue, camera angle, and sound effect.
Team workflow: We split into four workstreams — UI/Design, Data/Dashboard, AI/Gemini, and Full-Stack/Game Logic. We defined TypeScript interface contracts early so everyone could build in parallel against stub data, then integrated at checkpoints.
Sound: All sound effects (gavel, objection sting, typewriter clicks, emotional reactions) are procedurally generated using the Web Audio API — no external audio files needed.
Challenges we ran into
Making AI stay in character without breaking the game. Gemini is powerful but unpredictable. A witness could accidentally confess their lie, spoil the answer, or break the fourth wall. We spent significant time tuning system prompts to make characters feel alive while keeping them on-script. The key insight: give the AI the witness's real stats and their specific lie, then let it improvise around that constraint rather than scripting exact responses.
Designing lies that are both funny and solvable. A lie has to be entertaining ("I chased Gengar across the building!") but also logically catchable by comparing one specific stat. We went through multiple iterations to find lies that felt natural in dialogue but had clear numerical contradictions. The difficulty scaling — from obvious Speed lies to subtle Sp. Def contradictions — took real tuning.
Scope discipline under a 24-hour deadline. We had ideas for multiplayer, mobile UI, custom case generators, and procedural testimony. We cut all of it. One polished gameplay loop with three complete cases, one solid demo, one backup video. Every time someone started building something outside the core loop, we pulled them back.
Integrating four parallel workstreams. Four people building in parallel means four sets of assumptions about how data flows. Defining TypeScript contracts at hour zero was the single best decision we made. When we started wiring modules together at integration checkpoints, everything clicked because the types enforced the interfaces.
Accomplishments that we're proud of
The dataset is the game. It is not decoration. Every objection the player files is a hypothesis test against real data. You cannot win by guessing — you have to read the evidence, compare it to the testimony, and identify the contradiction. That is the core loop, and it works.
The AI characters have genuine personality. Judge Slowking yawns and hits his gavel. Mr. Mime gestures with invisible objects. Wobbuffet gets progressively sweatier as you press him. These are not canned responses — each conversation plays out differently because Gemini generates dialogue in character based on the game context.
The MenoLearn angle is real, not stapled on. After each correct objection, the game teaches you the critical thinking skill you just applied — feasibility analysis, capability assessment, identity verification — with a real-world example. The game does not lecture. It lets you experience the skill first, then names it.
It is actually fun to play. People laugh at the OBJECTION moment. They want to press Wobbuffet one more time to see him sweat. They feel smart when they catch the lie. That emotional loop — curiosity, pressure, discovery, satisfaction — is what makes it a game and not just a tech demo.
What we learned
One clear idea, fully polished, beats five half-built features. We had 24 hours. The teams that tried to build everything shipped nothing. We shipped one thing that works, looks good, and makes people smile.
Define your interfaces before you write your code. The TypeScript contracts we locked at hour zero saved us hours of integration pain. When four people build against the same types, the pieces fit together at merge time.
AI works best when you give it constraints, not freedom. An unconstrained Gemini prompt produces interesting but useless output for a game. A tightly constrained prompt — "you are Wobbuffet, you are hiding a Speed lie, you get nervous when asked about chasing" — produces consistent, playable dialogue every time.
Data-driven gameplay is underexplored. Most projects use datasets for visualization. Using a dataset as a game mechanic — where the player interacts with the data through narrative — creates a completely different kind of engagement. We think there is a lot more to explore here.
What's next for PokéCourt
- More cases and witnesses. The engine supports any Pokemon with stats. We want community-contributed cases where anyone can pick a Pokemon, write a lie, and create a trial.
- Procedural testimony generation. Use Gemini to auto-generate new lies from the dataset so every playthrough feels different.
- Deeper contradiction mechanics. Cross-witness contradictions (witness A's testimony contradicts witness B's stats), multi-stat lies, and type-effectiveness evidence.
- Different datasets, different games. The courtroom engine is not tied to Pokemon. Swap in a sports stats dataset, financial data, or historical records and you get entirely different trials. The core mechanic — compare claims against real data — works everywhere.
- Multiplayer trials. One player prosecutes, one defends, AI witnesses respond to both.
Built With
- gemini
- nextjs
- react
- tailwind
- three.js
- typescript
Log in or sign up for Devpost to join the conversation.