Inspiration
I’ve always been fascinated by the power of live, structured debate to sharpen critical thinking. At the same time, I wanted to explore how AI could both generate engaging topics and judge arguments in real time. Combining trending-topic discovery with a 1 v 1 debate platform felt like the perfect playground to learn about WebSockets, real-time matchmaking, and modern LLM integrations.

What it does
Fetches trending topics globally or by location using the Qloo API.
Matches two human players (or human vs AI) into a 1 v 1 debate via WebSockets.
Streams each user’s messages in “rounds” and uses ChatGPT to:
Analyze each message for bias, factual accuracy, and conduct.
Score and award points per round, e.g. roundScore = b + f + c, where b, f, and c are the bias, factual-accuracy, and conduct scores.
Keeps time with per-round timers, auto-advances rounds, and declares a winner when the debate ends.
Persists everything in PostgreSQL via Prisma—users, debates, messages, stats, and historical win/loss records.
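The per-round scoring above can be sketched as a small pure helper. Field and function names here are illustrative, not our actual schema:

```typescript
// Sub-scores the ChatGPT judge returns for one message (names are illustrative).
interface MessageScores {
  bias: number;    // b: bias score
  factual: number; // f: factual-accuracy score
  conduct: number; // c: conduct score
}

// roundScore = b + f + c, summed over every message a player sent that round.
function roundScore(messages: MessageScores[]): number {
  return messages.reduce((sum, m) => sum + m.bias + m.factual + m.conduct, 0);
}

// Example: two messages in one round.
const score = roundScore([
  { bias: 2, factual: 3, conduct: 1 },
  { bias: 1, factual: 2, conduct: 2 },
]);
console.log(score); // 11
```

Round scores then roll up into per-user stats at the end of the debate.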
How we built it
Backend in TypeScript with Fastify for REST + WebSocket routes.
Database schema in Prisma (PostgreSQL) modeling users, queue entries, debates, messages, and aggregated stats.
Matchmaking service: atomic PostgreSQL transactions to pair FOR vs AGAINST under geo/topic filters.
WebSocket Hub (ws) for live matchmaking and debate events (READY, QUEUED, MATCHED, MESSAGE, TIMER, ROUND_SCORED, DEBATE_ENDED, etc.).
QlooService for trend lookup + disk-persisted cache (JSON file) to minimize external API calls.
DebateService orchestrating debate lifecycle, timers, LLM judging, AI opponent replies, and stat upserts.
OpenAI Chat API (GPT-4o-mini) with temperature=0 for reproducible scoring and title generation.
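A minimal sketch of the kind of deterministic judging call this involves, using the Chat Completions REST endpoint directly. The prompt wording and the JSON response shape are illustrative, not our exact payloads:

```typescript
// Shape we ask the judge to return (illustrative).
interface RoundJudgement {
  bias: number;
  factual: number;
  conduct: number;
  feedback: string;
}

// Builds the judging prompt for one message (wording is a sketch).
function buildJudgePrompt(topic: string, side: "FOR" | "AGAINST", message: string): string {
  return [
    `Topic: ${topic}`,
    `Side: ${side}`,
    `Message: ${message}`,
    `Score bias, factual accuracy, and conduct from 0-5 each.`,
    `Reply with JSON: {"bias":n,"factual":n,"conduct":n,"feedback":"..."}`,
  ].join("\n");
}

// temperature: 0 keeps scoring reproducible for identical inputs.
async function judgeMessage(
  topic: string,
  side: "FOR" | "AGAINST",
  message: string
): Promise<RoundJudgement> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      temperature: 0,
      response_format: { type: "json_object" },
      messages: [{ role: "user", content: buildJudgePrompt(topic, side, message) }],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content) as RoundJudgement;
}
```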
Challenges we ran into
TypeScript + ESM import quirks (making .ts imports work under ts-node/esm).
WebSocket typing mismatches—figuring out how to type conn vs socket without SocketStream.
Building a fair, concurrent matchmaking algorithm in a multi-user setting without race conditions.
Designing a persistent cache (memory + on-disk JSON) that “never expires” until manually cleared.
Handling mid-debate disconnects gracefully and still persisting wins based on current scores.
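The core pairing rule behind the matchmaking challenge can be sketched as a pure function. In the real service this runs inside a single PostgreSQL transaction (via Prisma) so that two concurrent matchers can never claim the same queue entry; the entry shape below is illustrative:

```typescript
// Illustrative queue entry (the real Prisma model has more fields, e.g. geo).
interface QueueEntry {
  userId: string;
  topicId: string;
  side: "FOR" | "AGAINST";
}

// Pairs the first FOR/AGAINST entries that share a topic.
// Returns [FOR entry, AGAINST entry], or null if no opponent is waiting.
function findMatch(queue: QueueEntry[]): [QueueEntry, QueueEntry] | null {
  for (const a of queue) {
    const b = queue.find(
      (e) => e.topicId === a.topicId && e.side !== a.side && e.userId !== a.userId
    );
    if (b) return a.side === "FOR" ? [a, b] : [b, a];
  }
  return null;
}
```

Keeping the rule pure makes it easy to test; the transaction wrapper only has to delete both entries and create the debate atomically.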
Accomplishments that we’re proud of
A full-featured demo: browse hot topics, sign up with JWT-gated auth, queue, debate, get judged, and see your stats update.
Seamless human vs AI mode with instant pairing against a “Synth-AI” side.
Swagger documentation for every REST endpoint, and Postman-ready examples to test signup, signin, topic fetch, etc.
Robust error handling and automated recovery (e.g. retrying Qloo calls on “at least one valid filter” errors).
A reproducible scoring pipeline where each message’s three sub-scores roll up into a round score, which then rolls up into user stats.
What we learned
The ins and outs of Prisma relations, cascading deletes, and raw transactions for safe matchmaking.
How to architect a WebSocket-driven real-time app that gracefully scales to many debates in flight.
Best practices for LLM prompt engineering—from topic titles to judgeRound payloads.
Building a persistent cache that survives process restarts yet can be invalidated on demand.
Deeper appreciation for UX timing—making sure both players see the same countdown and round advancement.
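The restart-surviving cache can be sketched as a memory-first store backed by a JSON file; entries never expire and only disappear when clear() is called (class and file names are illustrative):

```typescript
import { readFileSync, writeFileSync, existsSync, unlinkSync } from "node:fs";

// Memory-first cache backed by a JSON file so entries survive process restarts.
class DiskCache {
  private mem: Record<string, unknown>;

  constructor(private file: string) {
    // Warm the in-memory map from disk on startup, if a snapshot exists.
    this.mem = existsSync(file) ? JSON.parse(readFileSync(file, "utf8")) : {};
  }

  get<T>(key: string): T | undefined {
    return this.mem[key] as T | undefined;
  }

  set(key: string, value: unknown): void {
    this.mem[key] = value;
    writeFileSync(this.file, JSON.stringify(this.mem)); // persist every write
  }

  // Entries "never expire": only an explicit clear() empties the cache.
  clear(): void {
    this.mem = {};
    if (existsSync(this.file)) unlinkSync(this.file);
  }
}
```

Writing the whole map on every set is fine at trend-lookup volumes; a production store would want atomic writes or a real KV store.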
What’s next for AI Powered Debate Hub
Tournament mode: bracket-style play and live leaderboards.
Mobile clients with native WebSocket support and push notifications.
Advanced judging: incorporate sentiment analysis, citation checks, and richer feedback.
Community features: public profiles, follower feeds, and debate replays.
Multilingual support: let users debate in any language, with real-time translation and localized prompts.
Built With
- chatgpt
- fastify
- nextjs
- openai
- prisma
- websockets